While widely debated among politicians and ethicists, the complexities surrounding autonomous weapon systems (AWS) can quickly become a headache for engineers and warfighters. It is well known that interpretations of the concept vary depending on culture, social group, political system, and even power politics.

Integrating certain levels of autonomy into weapon systems is accepted today, as an unavoidable step to cope with processes too complex, and reaction times too short, for a human – for example, in missile interception. The most contested aspect of a high degree of autonomy in a weapon system is when the system is able to unleash its lethal effect without ‘meaningful human control’. Arguably, we would speak in this case of “Lethal Autonomous Weapon Systems” (LAWS), for which no commonly agreed definition exists, as recognised on the website of the United Nations Office for Disarmament Affairs (UNODA).

Defensive weapon systems that require autonomy for the detection and engagement of incoming projectile threats, such as the pictured Phalanx close-in weapon system (CIWS) or the ‘hard kill’ active protection systems seen on land vehicles, are generally accepted and not subject to (L)AWS controversies.
Credit: US Navy

From the perspective of international humanitarian law (IHL), LAWS are covered by the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons (CCW). In 2023, the Group of Governmental Experts (GGE) on Emerging Technologies in the Area of LAWS agreed “on the notion that weapons systems based on emerging technologies in the area of LAWS must not be used if they are incapable of being used in compliance with IHL.”[1] To remind the reader, the core principles of IHL are:

  • distinction between civilians and combatants;
  • prohibition on attacking those not directly engaged in hostilities;
  • prohibition on inflicting unnecessary suffering;
  • the principle of necessity;
  • the principle of proportionality.

The CCW/GGE also published an informative but non-exhaustive compilation of definitions and characterisations of emerging technologies in the field of LAWS.[2] While the compilation was aimed at facilitating the Group’s discussions, it seems to serve in many analyses as a reference for national interpretations of the concept of (L)AWS, and to highlight the differences in perspectives between countries.

Persistent ambiguities regarding a commonly agreed definition also imply that the freedom to design certain autonomy functions depends on national understandings, and that the industrial base – whether private or public – will likely follow the requirements derived from its (main) customer, as long as the IHL principles are respected.

Parameters in the characterisation of AWS

The majority of definitions highlight two main parameters: the degree of machine autonomy allowing the system to make decisions and act more or less independently; and the degree of human control, for example to:

  • approve the system’s decisions before acting;
  • reverse the system’s decisions to act, possibly even during execution.

Human control over AWS is particularly important when the target area involves human presence, especially civilians, who risk becoming collateral victims. More generally, human control is important because humans – endowed with natural cognitive abilities – are assumed to be better able to discriminate subtleties in changing situations, including receiving new orders. By contrast, machines tend to make decisions based on pre-programmed tasks and rules, and on the processing of available data. As such, an underlying parameter in the characterisation of AWS is the application of artificial intelligence (AI) and machine learning (ML), both key enablers of the system’s performance, including its ability to discriminate, judge and take action, especially against incoming threats that are too fast-moving for a human to react to.

The degree of human control is generally placed in three categories: human in, on, and out of the loop. Even the descriptions of these three categories tend to vary depending on which stakeholder describes them. For the purposes of this article, the explanations given by the US Congressional Research Service (CRS) in its paper on ‘US Policy on Lethal Autonomous Weapon Systems’ serve as the reference.
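To make the three categories concrete, the following minimal sketch (purely illustrative; the names and logic are the author’s, not drawn from any official policy or fielded system) models how each level of human control could gate an engagement decision:

    from enum import Enum

    class HumanControl(Enum):
        IN_THE_LOOP = "operator selects/approves each engagement"
        ON_THE_LOOP = "operator supervises and may halt engagement"
        OUT_OF_THE_LOOP = "no operator intervention after activation"

    def may_engage(mode: HumanControl,
                   operator_approved: bool,
                   operator_halted: bool) -> bool:
        """Illustrative engagement gate for the three control categories."""
        if mode is HumanControl.IN_THE_LOOP:
            # Engagement requires prior operator approval.
            return operator_approved
        if mode is HumanControl.ON_THE_LOOP:
            # Engagement proceeds unless the operator halts it.
            return not operator_halted
        # Out of the loop: the system engages without further operator input.
        return True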

GA-ASI’s Gray Eagle Extended Range (GE-ER) UAV is a further development of the MQ-1C Gray Eagle, and features an automatic take-off and landing system (ATLS) that allows the aircraft to be launched and recovered without any operator interaction.
Credit: GA-ASI

On the other hand, AI and ML could also enable autonomy beyond the possibility of human control and agency, making machine decisions irreversible and/or highly unpredictable. As a result, some regulatory and ethical aspects regarding LAWS become a more specialised extension of the AI/ML frameworks. However, it is worth mentioning that “AI is not a prerequisite for the functioning of autonomous weapons systems, but, when incorporated, AI could further enable such systems. In other words, not all AWS incorporate AI to execute particular tasks”, as clarified on the UNODA website.

While general agreement seems to prevail on the aforementioned parameters, nuances in scope, as well as in the applicable AI policy and ethical frameworks, may lead to different technological paths for AWS, with consequences beyond the issue of LAWS alone.

Trends of major players

Likely fostered by the competition between them, the US and Chinese definitions (or, rather, ‘conceptualisations’) of (L)AWS tend to be the most closely followed in this debate. Both countries admit the possibility of autonomous weapons, including LAWS, so neither envisages a total ban.

Thanks to existing policy, the US approach is relatively clear in the distinction it makes between ‘autonomous’ and ‘semi-autonomous’ weapon systems (SAWS), though it is slightly less clear where the ‘lethal’ (LAWS) category applies.

The US Department of Defense (DoD) Directive 3000.09 on ‘Autonomy in Weapons Systems’ provides formal definitions serving the purpose of the directive:

  • An AWS is “a weapon system that, once activated, can select and engage targets without further intervention by an operator. This includes, but is not limited to, operator-supervised AWS that are designed to allow operators to override the operation of the weapon system, but can select and engage targets without further operator input after activation.”
  • A SAWS is “a weapon system that, once activated, is intended to only engage individual targets or specific target groups that have been selected by an operator. This includes: weapon systems that employ autonomy for engagement-related functions (….), provided that operator control is retained over the decision to select individual targets and specific target groups for engagement”.

The DoD directive is applicable to both lethal and non-lethal, kinetic and non-kinetic force employed by AWS and SAWS. The views on LAWS, more specifically, can be derived from the joint position submitted to the UN CCW/GGE (1/2023/WP.4) by Australia, Canada, Japan, the Republic of Korea, the UK, and the US, where lethality appears as a sub-set of “sophisticated weapons with autonomous functions” (potentially enabled by AI), which include “those weapon systems that, once activated, can identify, select, and engage targets with lethal force without further intervention by an operator”.

DARPA’s Anti-Submarine Warfare Continuous Trail Unmanned Vessel (ACTUV) programme developed an unmanned vessel to track quiet diesel-electric submarines, exploring the performance potential of a surface platform conceived under the premise that a human is never intended to step aboard.
Credit: DARPA

In describing the US policy on LAWS, the US CRS characterises them as a sub-set: a “special class of weapon systems that use sensor suites and computer algorithms to independently identify a target and employ an onboard weapon system to engage and destroy the target without manual human control of the system”.

The role of the human operator in target selection and engagement decisions is a particularly important parameter in the US interpretation since it is used to distinguish complete autonomy from other forms of autonomy. According to the US CRS paper, semi-autonomous weapons correspond in the US policy to the category of ‘human in the loop’, meaning weapon systems that “only engage individual targets or specific target groups that have been selected by a human operator”. These can also include “fire and forget” weapons, such as certain types of guided missiles which deliver effects to human-identified targets using autonomous functions.

‘Human-supervised’ or ‘human on the loop’ AWS are placed one degree of autonomy higher, meaning that, though autonomous, operators still “have the ability to monitor and halt the weapon’s target engagement.” ‘Full autonomy’ is represented by ‘human out of the loop’, meaning a “weapon system[s] that, once activated, can select and engage targets without further intervention by a human operator.” Although there is no official confirmation of this, it seems that LAWS are only associated with the category of ‘full autonomy’.
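Read together, Directive 3000.09 and the CRS descriptions suggest a simple decision rule. The sketch below is the author’s interpretation of that rule, not an official DoD classification tool; the two inputs paraphrase the distinguishing criteria quoted above:

    def classify_weapon_system(targets_selected_by_operator: bool,
                               operator_can_monitor_and_halt: bool) -> str:
        """Illustrative mapping of the DoD/CRS categories (author's reading)."""
        if targets_selected_by_operator:
            # 'Human in the loop': only engages operator-selected targets.
            return "semi-autonomous weapon system (SAWS)"
        if operator_can_monitor_and_halt:
            # 'Human on the loop': autonomous, but supervised and haltable.
            return "human-supervised autonomous weapon system"
        # 'Human out of the loop': selects and engages without intervention.
        return "fully autonomous weapon system"

    # Example: a 'fire and forget' missile against a human-identified target
    # still counts as semi-autonomous under this reading.
    print(classify_weapon_system(True, False))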

Contrary to what some critics suggest, the DoD Directive implicitly admits that AI is an important enabler, and stresses the importance of compliance with the DoD’s ‘AI Ethical Principles’ and its ‘Responsible Artificial Intelligence Strategy and Implementation Pathway’ in the design and development of AWS.

Besides being an official source for the US approach on AWS, DoD Directive 3000.09 is primarily a document establishing policy and assigning responsibilities for the development and use of autonomous and semi-autonomous functions in weapon systems. It defines guidelines to minimise the probability and consequences of failures in these systems, as well as rigorous procedures that must be applied for:

  1. the design, verification and validation, and testing and evaluation of AWS and SAWS;
  2. the types of approval processes, often very complex, that are necessary for starting the design and development of AWS and SAWS;
  3. the approval processes for any modification of an existing system.

The DoD Directive is a functional document that helps engineers and decision-makers clearly understand what they can/cannot and should/should not do along the entire process of AWS design and development. It is also built upon the US ethical framework, but it does not leave design and development solely to ethical interpretation.

Blowfish A3 is a rotary-wing reconnaissance and attack UAV, part of the Blowfish family developed by the Chinese company Ziyan UAV. Thanks to an optional AI module, Blowfish A3 can automatically identify and track targets, allowing it to engage moving targets.
Credit: Ziyan UAV

In the context of CCW discussions, China makes a distinction between ‘acceptable’ and ‘unacceptable’ AWS. In the first category, the weapons could have a high degree of autonomy, but are always under human control, can be suspended by the human and, therefore, are deemed or expected to comply with basic IHL principles. The unacceptable AWS should include, but not be limited to, a sum of characteristics such as:

  • Lethality conferred by the payload;
  • absence of human intervention and control during the entire process of executing a task;
  • irreversibility of the mission;
  • indiscriminate killing regardless of conditions, scenarios and targets;
  • the possibility for the system to learn autonomously and thus to evolve, through expanding its functions and capabilities in a way exceeding human expectations.

The last characteristic hints at the possibilities offered by AI and ML. In the CCW context, China has given LAWS the same five basic characteristics as the ‘unacceptable’ AWS.

The Chinese concept of AWS differs from that of the US in that it is constructed around ethical arguments (acceptable/unacceptable) rather than around measurable parameters such as the degree of human control. Moreover, the concept of LAWS is narrowed down to a sum of basic characteristics, raising the question of whether the lack of one would ‘disqualify’ the system from being a LAWS. As such, the design characteristics/requirements for LAWS are very narrow, and therefore difficult to meet as a sum, while the design characteristics for other (acceptable) AWS are very wide in scope.
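Formally, this characterisation behaves like a logical conjunction: a system falls into the ‘unacceptable’/LAWS category only if all five characteristics hold at once. A toy sketch of that logic (the field names are the author’s shorthand, not official terminology):

    # Author's shorthand labels for the five characteristics listed above.
    UNACCEPTABLE_CRITERIA = (
        "lethal_payload",
        "no_human_intervention_or_control",
        "irreversible_mission",
        "indiscriminate_killing",
        "autonomous_evolution_beyond_human_expectations",
    )

    def is_unacceptable_aws(system: dict) -> bool:
        """True only if the system exhibits ALL five characteristics."""
        return all(system.get(c, False) for c in UNACCEPTABLE_CRITERIA)

    # A system lacking even one characteristic - here, a mission the
    # operator can still abort - is 'disqualified' from the category.
    candidate = {
        "lethal_payload": True,
        "no_human_intervention_or_control": True,
        "irreversible_mission": False,
        "indiscriminate_killing": True,
        "autonomous_evolution_beyond_human_expectations": True,
    }
    print(is_unacceptable_aws(candidate))  # False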

The inclusion of the evolution/autonomous learning characteristic, hinting at the opportunities and risks of AI, is an interesting aspect of the Chinese understanding of LAWS. Before the 2017 ‘New Generation AI Development Plan’, aiming to transform China into the AI world leader by 2030, the 2016 ‘Notification on National S&T Innovation Programs for the 13th Five-Year Plan’ introduced the notions of “brain-inspired computing” and “brain-computer intelligence.” The China Brain Project adopted in 2016 implements brain-inspired AI research which seeks to (mathematically) describe the brain processes contributing to behaviour, to develop brain mappings and brain-computer interfaces.

In terms of system design, the combination of the various facets of the Chinese AI strategy with the trends hinted at in the ‘2019 Defense White Paper’[3] – that war is evolving towards “informationised” and “intelligentised” warfare – could lead us to think of a concept of “post-AWS” where, based on sophisticated cognitive processes replicating human brain processes, AWS:

  • Are capable of capturing and understanding obvious or subtle changes in the environment, are able to better discriminate between the various types of targets and engage them only under proper conditions, and, if needed, reverse the mission themselves;
  • can better team up with the human operators via exponentially improved human-machine interfaces (HMIs), allowing human and machine to take collaborative decisions but leaving ultimate agency to the human.

In such a scenario – which is only imagined by the author – the five basic characteristics of unacceptable AWS (or LAWS) would be even harder to meet as a sum.

It is worth mentioning that, in the CCW context, the Russian Federation characterises LAWS as “a fully autonomous unmanned technical means other than ordnance that is intended for carrying out combat and support missions without any involvement of the operator.” It is noticeable that support missions are also included, and that the targeting function is not specifically addressed. Russia stresses that the “issue [of LAWS] pertains to prospective types of weapons” and their definition should “contain the description of the types of weapons that fall under the category of LAWS” (…) “not be limited to the current understanding of LAWS, but also take into consideration the prospects for their future development”, and be “universal in terms of the understanding by the expert community”.

ZALA Lancet is a UAV and loitering munition, or ‘kamikaze drone’, developed by the Russian company ZALA Aero Group. Lancet is estimated to be the primary loitering munition used by Russia in the war in Ukraine, and to have inflicted significant damage on Ukrainian equipment and crews.
Credit: Nickel Nitride, via Wikimedia Commons

At the opposite end of the spectrum from the narrow Chinese characterisation, the Russian concept is broad, with AWS design possibilities restricted mostly by the designer’s imagination.

The EU is known to have one of the strictest regulatory and ethical frameworks, not only regarding LAWS, but autonomy and AI in general. The 2018 European Parliament resolution on AWS[4] refers to LAWS as “weapon systems without meaningful human control over the critical functions of selecting and attacking individual targets”. The resolution called for the relevant EU bodies to develop and adopt a common position on LAWS that ensures this meaningful human control, and to work towards the start of international negotiations on a legally binding instrument prohibiting LAWS.

The EU statement made at the 2023 CCW meeting of the High Contracting Parties is also built around the notion of human control: the human must make the decisions regarding the use of lethal force, exert control over the weapons, and remain accountable for these decisions. It is also stated that the future GGE mandate “should contain concepts that enjoy widespread support, including [a] so-called ‘two-tier’ approach to weapons systems in the area of LAWS.” The notion of a two-tier approach was introduced into the CCW/GGE discussions in 2023 and suggests that certain AWS will/should require prohibition, and all others regulation. Several countries seem to support this approach, but it remains to be seen whether any agreement will be reached on the types of LAWS that should be prohibited.

The EU framework does not prevent variations of interpretation at member state level, especially since defence remains mainly a national competence. Nonetheless, the convergence of views between member states was demonstrated by the joint position submitted to the CCW/GGE in 2022 by Finland, France, Germany, Italy, the Netherlands, Spain and Sweden, as well as Norway, a country that participates in many EU research programmes. This 2022 joint position went in the direction of a two-tier approach.

The European Defence Fund (EDF) excludes the development of what the EU understands as LAWS from its funding actions, but funding for early warning systems and countermeasures for defensive purposes can be envisaged. Anyone acquainted with the EDF process is aware of the complexity of the ethical assessment regarding autonomy and AI functions. At the submission stage, this process was thankfully simplified through a questionnaire and a range of reference documents. While an ethics evaluation is an absolute necessity, it remains a process of elaborate interpretation based on ethics references. For engineers, who are typically used to responding to a set of functional design requirements, and to taking structured and controlled steps based on systems engineering and quality assurance standards, a document similar to the US DoD Directive 3000.09, but adapted to EU concepts, would probably be even more welcome.

Sea Baby is a multi-purpose unmanned surface vehicle (USV) developed for use by the Security Service of Ukraine (SBU). It is reported to have been used for the first time in the July 2023 attack on the Kerch Strait Bridge connecting Crimea to mainland Russia.
Credit: SBU

Design considerations

Taking the analysis above into consideration, and despite some differences, there is agreement at the transatlantic level that human control is an important parameter for the classification of autonomy in weapons. LAWS, in turn, are understood through the filter of two parameters: autonomous machine identification, selection and engagement of human targets, combined with the impossibility for the human to decide on the engagement, or to reverse it. Precisely to avoid a loss of human control, and to better understand machine behaviour (including unpredictable deviations), several actions are necessary, including:

  • Research to define more granular levels, or aspects, of human control, beyond the notions of in, on and out of the loop. Such research should not be driven only by ethics considerations, let alone emotions, but should be based on technical realities and on rigorous testing and configuration management processes. On this basis, research should also address modalities of improving human control despite complexity, as well as the development of trusted AI able to integrate human inputs in real time.
  • Quality assurance standards must be constantly updated to reflect the evolution of automation in conjunction with the evolution of human-control capacity.
  • In a context where multi-domain operations are expected to increase in complexity, an elaborate concept of ‘modular autonomy’ may be needed, whereby certain autonomous systems can be given the possibility to function autonomously in certain situations but be reconnected to the architecture and the human-led command and control chain when required. Such a concept goes in the direction of a ‘reverse-Matrix’ scenario where it is the machine that is connected or disconnected, rather than the human (see the sketch after this list).
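As a thought experiment only, modular autonomy could be modelled as a simple state machine in which the machine, not the human, is attached to or detached from the command and control (C2) chain. All names below are the author’s invention, not an existing interface:

    class ModularAutonomyNode:
        """Toy model of an asset that can be detached from, and
        re-attached to, a human-led C2 chain ('reverse-Matrix')."""

        def __init__(self) -> None:
            self.connected_to_c2 = True  # default: human-led control

        def disconnect(self, reason: str) -> None:
            # e.g. a jammed datalink, or a delegated, time-boxed task
            self.connected_to_c2 = False
            print(f"Operating autonomously: {reason}")

        def reconnect(self) -> None:
            # control and accountability return to the human-led chain
            self.connected_to_c2 = True
            print("Reattached to human-led C2 chain")

        def may_engage(self, operator_approved: bool) -> bool:
            if self.connected_to_c2:
                # While connected, engagement authority stays with the human.
                return operator_approved
            # While disconnected, only narrowly scoped, pre-delegated
            # rules of engagement would apply (conservatively denied here).
            return False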

Conclusion

From an IHL perspective, Russia’s war of aggression against Ukraine has shown how much indiscriminate damage can be inflicted by armed platforms with only very basic autonomy functions, such as loitering munitions. The principles of IHL do not state that it is preferable to have a human making decisions rather than a high-performing AWS which, thanks to its sensing superiority, is capable of precise and discriminate targeting. The main difficulty with AWS, and especially LAWS, is the issue of agency and responsibility in case of technical, operational or strategic failure. It should be understood that more autonomy does not necessarily mean less human control, and an enhanced capacity to interact with machines does not turn operators into superhumans. It is just that, at this point in time, we cannot say clearly who should be held responsible.

Manuela Tudosia


[1] https://disarmament.unoda.org/timeline-of-laws-in-the-ccw/

[2] https://docs-library.unoda.org/Convention_on_Certain_Conventional_Weapons_-Group_of_Governmental_Experts_on_Lethal_Autonomous_Weapons_Systems_(2023)/CCW_GGE1_2023_CRP.1_0.pdf

[3] ‘China’s National Defense in the New Era’, Information Office of the State Council of the People’s Republic of China (July 2019). http://www.gov.cn/zhengce/2019-07/24/content_5414325.htm

[4] European Parliament resolution of 12 September 2018 on autonomous weapon systems, 2018/2752(RSP).