Artificial intelligence techniques are gradually being adopted in tactical battle management systems, bringing both risks and rewards.

In March 2019, Dr Zachary Davis penned a paper titled Artificial Intelligence on the Battlefield: An Initial Survey of Potential Implications for Deterrence, Stability, and Strategic Surprise. Dr Davis is a senior fellow at the Center for Global Security Research at the Lawrence Livermore National Laboratory (LLNL) in California. He is also a research professor at the US Navy’s Naval Postgraduate School in the same state. Davis begins his paper by describing artificial intelligence (AI) as a loose collection of phenomena associated with exploiting computers which, in turn, exploit ‘big data’. He then provides a succinct and useful definition of AI as “algorithms that are the basis of pattern recognition software.” These algorithms are combined with “high performance computing power.” This convergence allows data scientists to “probe and find meaning in massive data collections.”

AI, ML and tactical command and control

What is the attraction of incorporating AI approaches into tactical Battle Management Systems (BMS)? “The goal, at least in the US context, is to contribute to attaining notions of ‘decision advantage/dominance’ that are becoming increasingly important objectives,” said Dr Ian Reynolds, a doctoral fellow at the Internet Governance Lab and a research associate at the Center for Security, Innovation and New Technology, both at American University in Washington DC. “Such systems could leverage vast amounts of relevant data to help generate courses of action, uncover patterns, identify when adversary behaviour diverges from established patterns, and accelerate ‘sensor to shooter’ timelines.”
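
To make the idea of spotting divergence from established patterns concrete, the sketch below shows one generic way such a check could be built, using an off-the-shelf anomaly detector. It is purely illustrative: the track features, values and thresholds are invented, and no real battle management system is implied.

```python
# Illustrative sketch: flagging track behaviour that diverges from an
# established pattern, using scikit-learn's IsolationForest detector.
# All features and values are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Historical track features: [speed (m/s), heading (deg), altitude (m)],
# drawn from a notional 'normal' pattern of life.
historical = np.column_stack([
    rng.normal(12, 2, 500),    # typical patrol speed
    rng.normal(90, 10, 500),   # typical patrol heading
    rng.normal(100, 15, 500),  # typical patrol altitude
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(historical)

# Two new tracks: one consistent with the pattern, one sharply divergent.
new_tracks = np.array([[12.5, 92.0, 105.0],
                       [45.0, 270.0, 20.0]])
for track, verdict in zip(new_tracks, detector.predict(new_tracks)):
    label = "DIVERGES from pattern" if verdict == -1 else "matches pattern"
    print(f"track {track.tolist()}: {label}")
```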

Davis argued in his paper that “AI applications at the operational level of war could have a very significant impact on the use of general-purpose military forces to achieve tactical objectives.” Patrick ‘Krown’ Killingsworth, Director of Autonomy Product at EpiSci, said that “at EpiSci we see our AI systems augmenting the human decision-making process in [the battle management and command and control domains]. At all levels of warfare, the goal is to relieve human cognitive workload.” Killingsworth continued: “[A]t the tactical level this information would augment a commander’s knowledge and experience, which could be by using AI to generate potential courses of action or display likely outcomes of the current status of forces.” Likewise, AI approaches could help with drafting written materials, said Henrik Sommer, Director of Land Strategy at Systematic: “Developing and writing plans and orders can be sped up through the use of AI technologies.” Accelerating drafting processes is something the civilian world is increasingly embracing with AI. Witness the uptake of Large Language Model (LLM)-based tools such as ChatGPT for everything from drafting press releases to writing comedy Country and Western lyrics about Smurfs.
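
One long-established, deliberately simple way of presenting potential courses of action is the weighted decision matrix, which AI-generated scores could feed. The toy sketch below illustrates that principle only; the criteria, weights and scores are invented, and it does not represent EpiSci’s or any other vendor’s method.

```python
# Toy decision-support sketch: a weighted scoring matrix over hypothetical
# courses of action (COAs). Criteria, weights and scores are all invented.
CRITERIA_WEIGHTS = {"speed": 0.40, "force_protection": 0.35, "surprise": 0.25}

# Notional 0-10 scores that an analytical model might assign to each COA.
COAS = {
    "COA 1: frontal assault":  {"speed": 8, "force_protection": 3, "surprise": 2},
    "COA 2: flanking attack":  {"speed": 5, "force_protection": 7, "surprise": 8},
    "COA 3: deliberate siege": {"speed": 2, "force_protection": 9, "surprise": 1},
}

def score(coa):
    """Weighted sum of a COA's criterion scores."""
    return sum(CRITERIA_WEIGHTS[criterion] * value for criterion, value in coa.items())

# Present the options ranked, so trade-offs are visible at a glance.
for name, criteria in sorted(COAS.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{score(criteria):.2f}  {name}")
```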

A step change in the quantity of data that militaries are expected to process at the tactical level is anticipated. AI holds promise in helping to collate this deluge of information.
Credit: GlobalMil

One key potential application for AI is sifting through the disparate incoming intelligence feeds received by tactical commanders at their headquarters. Sorting through torrents of disparate data is something that experts such as Killingsworth and Sommer feel AI is particularly suited for: “For tactical level operators, there is the possibility for them to enhance their intelligence gathering capabilities through the delivery of content such as video feeds, imagery and seized documents etc.,” said Sommer. This intelligence content “can then feed into the intelligence operations undertaken at higher echelon formations using AI technology.” Commanders receive information from sources as diverse as human, imagery and signals intelligence. It is entirely possible that most of this incoming intelligence has only peripheral relevance to their mission, but relevant information must be found, interpreted and distributed to those who need it. AI may have a role to this end. For example, algorithms could help sort and prioritise disparate items of intelligence that seem to have no connection to one another at first blush, yet make sense when fused. Using AI to aid the processing of intelligence could have the corresponding benefit of assisting targeting, and perhaps improving precision: training algorithms to recognise specific targets could accelerate the pace at which aimpoints are detected and located.
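
As a simple illustration of how software might triage such feeds, the sketch below ranks invented intelligence snippets against a notional priority intelligence requirement using TF-IDF text similarity, a basic information-retrieval technique; operational systems would be vastly more sophisticated.

```python
# Illustrative sketch: ranking invented intelligence snippets against a
# notional priority intelligence requirement (PIR) with TF-IDF similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pir = "enemy armour crossing the river near the northern bridge"

reports = [
    "SIGINT: increased radio traffic detected in the northern sector",
    "HUMINT: source saw tanks moving towards the bridge at dawn",
    "IMINT: pontoon equipment staged on the east bank of the river",
    "OSINT: social media posts about a market day in the southern town",
]

vectoriser = TfidfVectorizer(stop_words="english")
matrix = vectoriser.fit_transform([pir] + reports)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Items scoring highest against the PIR surface first, even though they
# arrived from entirely different collection disciplines.
for relevance, report in sorted(zip(scores, reports), reverse=True):
    print(f"{relevance:.2f}  {report}")
```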

This ability to sort and understand information can assist commanders with tactical decision-making: “Situational awareness can be improved for tactical operators through the automation that AI can bring,” Sommer noted. “For example, an intelligence system undertaking change monitoring tasks using computer vision can identify the presence (or lack thereof) of enemy assets and add this to the common intelligence picture.” This common picture can then be shared with users in the field. Artificial intelligence could also identify and task relevant intelligence, surveillance and reconnaissance (ISR) assets; once the assets perform their missions, effectors can be tasked to engage targets. AI could even be used to aid the process of battle damage assessment.
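
The change-monitoring task Sommer describes can be illustrated with a deliberately minimal example: differencing two co-registered images of the same area and flagging the pixels that changed. The synthetic imagery and threshold below are invented for illustration; fielded systems rely on learned computer-vision models rather than simple thresholding.

```python
# Illustrative change-detection sketch: difference two co-registered frames
# of the same terrain and flag regions that changed. Imagery is synthetic.
import numpy as np

rng = np.random.default_rng(seed=1)

# Two notional 64x64 grayscale frames of the same area.
before = rng.normal(0.5, 0.02, (64, 64))
after = before.copy()
after[40:48, 20:28] += 0.4  # a new object appears (e.g. a vehicle)

diff = np.abs(after - before)
changed = diff > 0.2  # per-pixel change threshold

if changed.any():
    rows, cols = np.nonzero(changed)
    print(f"change detected: {changed.sum()} pixels, "
          f"bounding box rows {rows.min()}-{rows.max()}, "
          f"cols {cols.min()}-{cols.max()}")
else:
    print("no significant change")
```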

Risk factors

An important note of caution, however, is that AI approaches embedded in battle management systems are not panaceas. As the technology stands at present, it is at best a set of tools which can aid decision-making. To be fair, experts such as Killingsworth do not see AI replacing the human in the command and control chain: “The human is still ultimately responsible for execution and conduct of the battle.” Nevertheless, AI-enabled tactical battle management systems could provide “the commander in the field [with] more information at their fingertips.” This information could “include the best AI predictions about how many potential decisions may play out.” Questions regarding the involvement of AI in the kill chain may be more complex: “Such systems are not ‘lethal’ in the sense that an AI-enabled system will complete the full cycle of target identification and engagement without human intervention,” noted Reynolds. However, “they are lethal in the sense that they shape the parameters of choice and the speed at which those choices occur, shaping what a commander might perceive as possible or desirable based on output of the BMS.” Reynolds continued that these factors risk making “meaningful [and] appropriate human control tenuous. This poses serious ethical questions as well as questions regarding how such systems cohere with the laws of war.”

Instead, the debate may need to be rethought: “We should think of AI-enabled decisions less in terms of a binary of human in control or not, and more in terms of complex, unfolding processes, in which elements of human agency are delegated to AI systems,” Reynolds suggested. Davis shared his personal views, emphasising that they do not represent those of LLNL, the US Department of Energy, the US National Nuclear Security Administration, the US Department of Defense (DoD) or any other US government agency. He highlighted a further concern: “The speed that is the goal of AI on the battlefield may be inconsistent with other important considerations in war, including diplomacy. Faster is not always better.” Nonetheless, he is optimistic regarding questions concerning human interaction with AI in the crucible of conflict: “The fear that commanders will turn life and death decisions over to automated systems is unsupported. No commanders want such intrusion into their authorities. That provides inherent guardrails.”

The increasing uptake of uninhabited systems on the battlefield, and their dependence on AI, is bound to further shape the debate regarding military reliance on the technology. It also raises questions about how different AI-based systems will interact with one another.
Credit: US Army

It is also important to guard against overreliance on AI-enabled battle management systems. Killingsworth warned that “commanders in the field [can] risk becoming reliant on these immensely powerful tools and may face decision paralysis without them.” It will be essential that commanders still know how to fight and win their battles when the attributes that AI brings are unavailable. Ensuring that the military trusts the technology is essential if it is to enjoy widespread adoption: “EpiSci goes to great lengths to make sure all our AI is bounded and assured, with a large emphasis on operator trust … none of these tools will help strengthen our national defence if they aren’t accepted by the operators.”

The extent to which AI approaches in tactical BMSs can assist depends on the quality of the data used to train the algorithms. Our own brains learn because our lives are a series of experiences we store as memories, which then influence our behaviour. Militaries, particularly those in NATO, thankfully go to war comparatively rarely. While this is good for peace, it results in a dearth of operational and tactical data with which to train algorithms. The data that are used must also be trustworthy. ‘Garbage in, garbage out’ (GIGO) is an oft-quoted computing truism, but it remains highly applicable. Aligned to this problem is the fact that data itself may become a Clausewitzian centre of gravity in future wars: a target as lucrative as a hostile headquarters. “Data are fragile,” warned Davis. “Part of the new era of cyber war is to attack adversary databases, which is actually easy and could hijack all the hoped-for benefits.” One way to tackle this problem is to decentralise data, Davis recommended: “Overreliance, especially on centralised databases and decision support, could lead to disaster.” Building redundancy into where and how data are stored is paramount.
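
One elementary guardrail against the database attacks Davis describes is to hold multiple copies of the data and compare cryptographic fingerprints across them, so that tampering with any single replica stands out. The sketch below illustrates the idea with invented record sets; it is an assumption about one possible safeguard, not a description of any fielded system.

```python
# Illustrative sketch: detect tampering by comparing SHA-256 fingerprints
# of replicated data stores. Replica names and records are invented.
import hashlib

def fingerprint(records: list[str]) -> str:
    """Order-independent SHA-256 fingerprint of a set of records."""
    digest = hashlib.sha256()
    for record in sorted(records):
        digest.update(record.encode("utf-8"))
    return digest.hexdigest()

replicas = {
    "hq_primary":    ["track-001,grid-1234", "track-002,grid-5678"],
    "field_backup":  ["track-001,grid-1234", "track-002,grid-5678"],
    "tampered_copy": ["track-001,grid-9999", "track-002,grid-5678"],
}

hashes = {name: fingerprint(data) for name, data in replicas.items()}
reference = hashes["hq_primary"]
for name, digest in hashes.items():
    status = "OK" if digest == reference else "MISMATCH - possible tampering"
    print(f"{name}: {status}")
```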

As Reynolds observed, war remains an inherently complicated affair: “AI-enabled systems tend to suffer in performance when tasked with dealing with complex and contingent situations.” He cited the problems that self-driving cars have had in reaching required levels of competency. “Since war is inherently complex, and involves constant efforts to deceive adversaries, there remain serious questions as to how AI-enabled battle management systems will contend with such situations.” It is axiomatic that AI-based command and control architectures must also be secure against hacking and cyberattack: “In terms of technical disruptions, data could be ‘poisoned’, or altered by adversaries, and sensing systems could be jammed, which could impact a BMS’s prediction and decision support capabilities,” Reynolds warned. “As a non-technical example, alterations to the physical environment that are unexpected for an AI-enabled system may lead to incorrect classification of an object (say a tank or enemy combatant).” As the wastelands of eastern Ukraine show, war has a habit of dramatically altering the physical environment.
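
The effect of the poisoned training data Reynolds mentions can be demonstrated on even the simplest model. In the illustrative sketch below, fabricated records injected into a synthetic training set visibly degrade a basic classifier; real attacks and defences are, of course, far more sophisticated.

```python
# Illustrative sketch of training-data 'poisoning': fabricated records
# injected into a synthetic training set degrade a simple classifier.
# Data, model and attack are deliberately minimal toy stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=2)

# Synthetic two-class 'sensor' data: two feature clusters.
X = np.vstack([rng.normal(0.0, 1.0, (500, 2)), rng.normal(2.5, 1.0, (500, 2))])
y = np.array([0] * 500 + [1] * 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clean = LogisticRegression().fit(X_tr, y_tr)

# The 'attack': inject fabricated records sitting in the heart of class 1's
# feature space but labelled as class 0.
fake_X = rng.normal(2.5, 0.3, (300, 2))
fake_y = np.zeros(300, dtype=int)
poisoned = LogisticRegression().fit(np.vstack([X_tr, fake_X]),
                                    np.concatenate([y_tr, fake_y]))

print(f"unpoisoned model accuracy on clean test data: {clean.score(X_te, y_te):.2f}")
print(f"poisoned model accuracy on clean test data:   {poisoned.score(X_te, y_te):.2f}")
```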

Palantir’s Artificial Intelligence Platform for Defence was unveiled in 2023 and relies on a large language model architecture to aid command and control.
Credit: Palantir

Cost issues are also highly relevant. “Computing power and memory can limit the deployment of AI technologies,” warned Sommer. “Militaries have to make the decision about how much investment they can make in sending higher powered computers forward, and into environments where they may be easily damaged, destroyed or potentially fall into enemy hands.” Available electricity is another concern, Sommer added: “The technology that runs AI needs a lot of power … That said, aircraft and ships possess the physical space and electrical generation capability to host artificial intelligence technology natively.”

It’s happening now

AI-enabled battle management systems are on the horizon. “We see militaries incrementally adopting these … tools, and that is starting today,” said Killingsworth. Industry is taking note. In 2023, Palantir unveiled its Artificial Intelligence Platform for Defence, which uses an LLM approach. Put simply, large language models are machine learning models trained on vast bodies of text, which they use to interpret and generate language; the popular ChatGPT chatbot is built on this approach. Commanders can interrogate the software to receive guidance on courses of action based on events that have occurred or are occurring. Personnel can request relevant intelligence or information, or order a specific course of action that the software then converts into written orders. Palantir were approached during the preparation of this article but did not respond to several inquiries.
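
The interrogation workflow described above can be sketched generically: a commander’s question is combined with the current picture into a prompt for a language model. The code below is hypothetical throughout; llm_complete is a placeholder stand-in, not Palantir’s API, which the company did not discuss for this article.

```python
# Hypothetical sketch of the interrogation workflow: a commander's question
# is combined with the current picture into a prompt for a language model.
# `llm_complete` is a placeholder stand-in, NOT Palantir's (or anyone's) API.
def llm_complete(prompt: str) -> str:
    """Stand-in for a call to whichever LLM endpoint a real system uses."""
    return "[model-generated draft guidance would appear here]"

def ask_bms(question: str, common_picture: list[str]) -> str:
    """Wrap the commander's query with the current intelligence picture."""
    context = "\n".join(f"- {item}" for item in common_picture)
    prompt = (
        "You are a decision-support aide. Using ONLY the reports below, "
        "suggest possible courses of action and flag gaps in the data.\n"
        f"Current reports:\n{context}\n"
        f"Commander's question: {question}\n"
    )
    return llm_complete(prompt)

picture = ["Enemy armour reported at grid 1234", "Bridge at grid 5678 intact"]
print(ask_bms("What are my options for delaying the enemy advance?", picture))
```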

The onward march of the Multi-Domain Operations mindset in the US military, and its JADC2 manifestation, places a premium on the efficient collection, treatment and movement of data. It is inevitable that JADC2 will lean heavily on AI.
Credit: L3Harris

Moreover, the US Department of Defense’s Joint All-Domain Command and Control (JADC2) architecture will have substantial AI content. JADC2 is the manifestation of the department’s Multi-Domain Operations (MDO) doctrinal mindset. MDO advocates the inter- and intra-force connectivity of all military assets (personnel, platforms, weapons, sensors, capabilities and bases) at all levels of war. The goal of MDO is to enable better-quality decision-making at a faster pace than one’s adversary. MDO places a premium on the collection, interpretation and dissemination of eye-watering quantities of data, and it is inevitable that AI approaches will be intrinsic to these processes, as data volumes may overwhelm human cognition. Meanwhile, Scale AI is working with the US Army to see how the company’s Donovan LLM could aid the service’s JADC2 contribution.

While the technology moves forward, preparations will have to be made for the absorption of AI in the tactical domain: “There will need to be cultural, doctrinal, and possibly socio-political changes in militaries before it can be adopted at the tactical point,” Sommer observed, adding that “as time passes and some of these technologies become more mainstream in the civilian world, then the role of digital natives in the military organisation may ease the adoption of AI and ML technologies.” Ultimately, Davis believes that “critical assessments, widespread understanding of the limitations, and extensive testing and simulation should enable military leaders to get what they need out of AI tools without endangering their troops, the public or their mission.”

Thomas Withington