
As sensor fusion and AI begin to enter the networked warfighting space, armed forces have been experimenting with and implementing some of the new possibilities these technologies can provide. 

The US Extended Range Cannon Artillery programme was designed to develop long-range precision strike capabilities from a tube artillery system and increase the quantity of effectors capable of exploiting the benefits of sensor fusion.
Credit: US Army/Ana Henderson

In 2020, a data centre at Joint Base Lewis-McChord in the US began to receive data from a number of low Earth orbit satellites and an MQ-1C Gray Eagle drone. The data was processed with the assistance of artificial intelligence (AI) before being passed onwards to Yuma Proving Ground. There, a targeting solution was created and passed to a specially designed howitzer, which loaded an XM1113 rocket-assisted projectile fitted with a precision guidance kit (PGK), and fired. The whole process reportedly took less than 20 seconds.[1] No forward observers, no tactical drones, no radars – except for the ones used to track the guided munitions. As it happened, the XM1113 projectile landed near the target but failed to detonate; the result, however, mattered less than the process. This engagement was not part of a live conflict, but an element of Project Convergence, the US Army’s experimental framework designed to explore multi-domain operations (MDO). It was unique because a long-range engagement was conducted using only space-based and strategic intelligence, surveillance, target acquisition and reconnaissance (ISTAR) assets, with the data interpreted and analysed with the help of AI and used to carry out an attempted engagement.

This approach may reflect the future of ISTAR for land warfare – not so much in its specifics, but in the fusion of multiple sensor outputs into a coherent operational picture that enables non-traditional targeting cycles.

The nuts and bolts

“Fusing information and then taking action has always been key to winning on the battlefield. It’s just that in the past, these sensors were human beings, information was passed verbally or in writing, and the fusion happened inside a commander’s head,” Will Blyth, co-founder and CEO of Arondite, a defence tech start-up building AI and autonomous systems, told ESD via email. “The emerging ISTAR paradigm shifts the human up the value chain,” he added, meaning that the outputs from multiple external sensors can be fused into a single operational picture by AI and then presented to the human.

This is one of the core challenges of driverless cars, which must take the inputs from multiple sensors of different types and fuse them into a single understanding of the surrounding world. The car can then make decisions based on its understanding of that world. While it is likely that a driverless car will receive sensor data from external systems – a satellite navigation network, for instance – the bulk of the computing will be done on the vehicle with its own sensors. It is, in that sense, computing at the edge. Military systems will always require the outputs from multiple external sensors to be fused into a single operational picture, which requires design decisions about where that computing should be done. In the ISTAR chain of direct, collect, process and disseminate, AI and sensor fusion are applicable to all stages.[2] However, the primary consideration here begins with collect and proceeds through process and disseminate.
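
The principle can be illustrated in a few lines of code. Below is a minimal sketch, not drawn from any fielded system, of how independent position reports on the same target might be fused into a single estimate by weighting each sensor according to its stated accuracy; the sensor names and error figures are invented for the example.

```python
# Minimal sketch: fusing independent position reports on one target into a
# single estimate, weighting each sensor by its stated accuracy.
# Sensor names and error figures are illustrative, not from any real system.
import numpy as np

reports = [
    # (sensor, easting_m, northing_m, 1-sigma error in metres)
    ("satellite", 351_220.0, 4_412_980.0, 40.0),
    ("uav_eo",    351_205.0, 4_412_995.0, 10.0),
    ("radar",     351_260.0, 4_412_940.0, 25.0),
]

positions = np.array([[e, n] for _, e, n, _ in reports])
weights = np.array([1.0 / sigma**2 for *_, sigma in reports])  # inverse variance

fused = (positions * weights[:, None]).sum(axis=0) / weights.sum()
fused_sigma = (1.0 / weights.sum()) ** 0.5  # combined 1-sigma error

print(f"fused position: {fused[0]:.1f} E, {fused[1]:.1f} N (±{fused_sigma:.1f} m)")
```

The design choice embodied here is that the most accurate sensor dominates the fused picture without the others being discarded; real fusion engines perform the same weighting in far more dimensions.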

For example, it is possible to install a computer loaded with AI algorithms onto a drone. As the drone records imagery of the world around it, the algorithms go to work on that imagery feed and generate outputs. The algorithms might be trained for image recognition or navigation, for example, and feed that data back to the operator on the ground. It is also possible to pass that data onwards to another system – a fire control system, for example – which is where the application becomes exciting. An alternative solution is to retain the AI computing power away from the edge, at the ground control station of a drone or at a command post, for example. This allows for greater computing power, but may require more communications bandwidth to transmit live video data or signals intelligence before the computing is carried out and outputs created.
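
As a rough illustration of the edge-processing option, the sketch below shows the shape of an onboard loop that runs recognition on each frame and downlinks only a compact message per detection rather than the raw video. The camera and model calls are stand-ins, not real APIs.

```python
# Illustrative sketch of edge processing on a drone: run recognition on each
# frame aboard the platform and downlink a compact result per detection
# rather than the raw video. The camera and model calls are stand-ins.
import json, time

def capture_frame():
    return object()  # stand-in for the drone's camera interface

def detect(frame):
    # stand-in for an onboard image-recognition model
    return [{"cls": "MBT", "conf": 0.87, "grid": "38SMB4484"}]

def edge_step(send):
    frame = capture_frame()
    for d in detect(frame):
        packet = json.dumps({"t": int(time.time()), **d}).encode()
        send(packet)  # tens of bytes per target, versus megabits/s for video

edge_step(lambda p: print(len(p), p))
```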

The IBCS from Northrop Grumman shown here was designed to provide a sensor fusion capability for air defence. It has been tested with various air- and land-based radars and sensors, and is in production for the US Army and various other users.
Credit: PEO Missiles & Space

There is also the question of algorithms. It is likely that most military applications will require multiple sets of algorithms trained for different purposes. For example, a drone fitted with a camera and an edge-processing capability might have one algorithm for image recognition, with several applications such as identifying camouflaged vehicles or artillery flashes. It may feed its outputs to a sensor fusion algorithm that compares and collates the information it receives from the drone’s sensors before distributing that information to other systems. The same camera and computer may include a navigation algorithm, which uses terrain matching and image recognition to understand where it is, and where it is going, in a GNSS-denied environment. Combined, several such algorithms can be used to autonomously generate targeting coordinates for an effector.
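
A hypothetical sketch of this chaining might look as follows, with recognition, terrain-matching navigation and geolocation composed into a single targeting pipeline; every function here is an illustrative stand-in rather than a real algorithm.

```python
# Sketch of several onboard algorithms chained together: recognition finds a
# target in the frame, terrain-matching navigation fixes the drone's own
# position without GNSS, and the two are combined into a target coordinate.
def recognise(frame):
    # image recognition: class and pixel location of a detected vehicle
    return {"cls": "SPG", "px": (412, 230)}

def navigate(frame, terrain_db):
    # terrain matching against stored imagery yields own position and altitude
    return {"own_pos": (351_000.0, 4_412_500.0), "alt_m": 900.0}

def geolocate(detection, nav):
    # project the pixel through the camera model onto the ground
    e, n = nav["own_pos"]
    return (e + 180.0, n - 240.0)  # offsets here are dummy values

def targeting_pipeline(frame, terrain_db):
    det = recognise(frame)
    nav = navigate(frame, terrain_db)
    return {"cls": det["cls"], "coord": geolocate(det, nav)}

print(targeting_pipeline(object(), None))
```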

But why?

AI is often touted as anything from the necessary future of defence to a solution for all of defence’s problems. The reality probably lies somewhere between these two extremes. It can generate new capabilities by helping to fuse sensors and shooters ever closer together. It can also address challenges presented by declining force and equipment levels: where in the past a human team would be sent for reconnaissance, a machine can be sent in their place and AI used to bring all of the outputs together.

For land ISTAR, sensor fusion offers new capabilities, such as the use of space-based assets for target engagements. It can also be used to generate different routes to an outcome. For example, the US Integrated Battle Command System (IBCS) demonstrated an air defence capability that combined a PATRIOT battery with AN/TPS-80 G/ATOR radars from the USMC and two F-35s.[3] The system shared targeting data between its assets, something not normally possible, and conducted a successful intercept against a cruise missile-like target. In theory, if this kind of technology were realised at scale, it could unlock the potential to include almost every sensor on a battlefield in an air defence network and fuse the bewildering array of resulting outputs into a single air defence picture. “By shifting the human up the value chain, you reduce cognitive burden, but there is also a requirement to place our values at the heart of how we deliver this chain,” Blyth explained. “This means ensuring that development of AI for the battlefield needs to be explainable and auditable, retaining the human as the decision maker,” he added.

A Leleka-100 drone carried by a soldier near Avdiivka. Ukraine has brought to the fore a trend that has been emerging for some time: the proliferation of tactical reconnaissance assets, which increases the potential benefits of sensor fusion.
Credit: АрміяInform, via Wikimedia Commons

In surface-to-surface applications, the challenges of ground-based observation include duplication or ‘double-counting’ of targets, and developing a shared understanding across a battlespace. With AI, it is theoretically possible to identify a platform and maintain an understanding of that platform between sensors, thereby minimising the risk of duplication. For example, a T-72B3 emerging from a forest could be identified by one drone, which would share its understanding of that tank with an armoured reconnaissance vehicle through a battle management system. As the tank came into view of the reconnaissance vehicle and its sights, it would be re-identified and confirmed to be the same tank that the drone had spotted earlier. This understanding could be shared across a formation using pixel-sized identifiers that only AI can spot. In an alternative scenario without sensor fusion, it is possible that this tank would be identified and reported twice as two different vehicles, complicating the task of establishing situational awareness.
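
The de-duplication logic can be sketched simply: a new report is matched to an existing track when it shares a class and falls within a gating distance, so the tank seen by the drone and later by the reconnaissance vehicle keeps a single identity. The thresholds and reports below are illustrative assumptions.

```python
# Minimal track-association sketch: match a new report to an existing track
# by class and proximity so the same vehicle is never counted twice.
import math
from itertools import count

_ids = count(1)
tracks = []  # each track: {"id", "cls", "pos"}

def associate(report, gate_m=150.0):
    for t in tracks:
        if t["cls"] == report["cls"] and math.dist(t["pos"], report["pos"]) <= gate_m:
            t["pos"] = report["pos"]  # update the track, don't double-count
            return t["id"]
    track = {"id": next(_ids), **report}
    tracks.append(track)
    return track["id"]

print(associate({"cls": "T-72B3", "pos": (1000.0, 2000.0)}))  # drone report -> 1
print(associate({"cls": "T-72B3", "pos": (1080.0, 1990.0)}))  # recce report -> 1, same tank
```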

Edge computing capabilities, with small AI-enabled computers installed on each platform within a formation, would enable this kind of capability, creating a shared understanding of the battlespace and of an adversary’s movements between operators, as well as reducing duplication of reporting. There are many applications for this type of sensor fusion; consider, for example, the challenges of operating in Ukraine or Afghanistan. In both conflicts, the force footprint is, and was, relatively low for the area of operations.

RUSI estimates from February 2024 indicate that there are some 470,000 Russian troops in Ukraine, while on 9 September 2023, Ukraine’s Defence Minister Rustem Umerov claimed a total of 800,000 members of the Ukrainian armed forces – though it is estimated that the majority are not deployed on the front line. These are ostensibly large numbers, but very few when considered against the 1,000 km-long frontline.[4] The frontline numbers for Ukraine may be closer to 200,000.[5] Deployments are often conducted at section level, with 10–15 personnel occupying and operating over a frontage of a few kilometres. It is not possible for a small section to control an area this large in a high-intensity war. It seems likely that the extreme dispersion in Ukraine is at least part of what drives the mass use of drones – one for every section.[6]

However, the way that the data collected by drone operators is fused together is often very slow. It may involve team calls on a virtual meeting suite and the cumbersome sharing of targeting data through screenshots. To reduce this time lag, drone operators are often co-located with an artillery system to provide real-time fire adjustment. It works, but the process could be more efficient.

In Helmand province, Afghanistan, the British Army’s peak strength was around 10,000 personnel, tasked with patrolling and contesting an area of 58,560 km² – slightly larger than Croatia – populated by 1.4 million people.[7] The UK’s troops were routinely split into small sections and deployed to isolated forward operating bases. They made extensive use of fixed-wing and rotary-wing air power to provide firepower that compensated for the lack of mass. Artillery also played a key role, and armoured rapid reaction forces were maintained to intervene in the event of a contact that involved multiple casualties, or one that could not be resolved by infantry and air power alone. This did not make the battlespace any smaller, however, and the British forces filled the gap with a rapid and extensive expansion of ISTAR assets. The Hermes 450 drone was deployed alongside MQ-9s and larger surveillance assets like the Shadow R1 (a modified Beechcraft King Air 350CER), as well as space-based assets and static surveillance balloons.[8]

Forward observers have been important for artillery fire control since indirect fire was first introduced. However, as sensor fusion capabilities mature, the nature of their role may change significantly.
Credit: RLW-E, via Wikimedia Commons

Many of these systems flew almost continuously from the moment they were deployed, generating thousands of hours of footage that had to be processed and analysed – up to nine Hermes 450s were eventually deployed, and they had flown 86,000 hours by 2014.[9] Their findings would be communicated manually over radio, and in some cases the video footage itself would be beamed directly to troops on the ground using ROVER terminals. As a result, the British armed forces required an extensive communications network and spent millions on satellite communications for the duration of their deployment to Afghanistan.

In both scenarios, it is possible to see how sensor fusion with edge-based AI would provide uplifts in situational awareness by automatically combining the outputs of multiple sensors from several domains. As an additional bonus, AI can condense its outputs into metadata packets that are much smaller than live-streamed video, and which can be distributed across a battle network more easily, requiring less bandwidth.
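
The bandwidth argument is easy to put numbers on. The back-of-envelope comparison below uses assumed, not measured, figures for a modest video downlink and a JSON-sized detection message.

```python
# Back-of-envelope comparison of metadata versus video bandwidth.
# The bit rates below are illustrative assumptions, not measured figures.
video_kbps = 2_000          # a modest live H.264 video downlink
packet_bytes = 200          # one JSON detection message
reports_per_min = 30

metadata_kbps = packet_bytes * 8 * reports_per_min / 60 / 1000
print(f"metadata: {metadata_kbps:.1f} kbit/s vs video: {video_kbps} kbit/s")
print(f"ratio: ~{video_kbps / metadata_kbps:,.0f}x less bandwidth")
```

Under these assumptions, the detection stream needs roughly 0.8 kbit/s against 2 Mbit/s for the video, a saving of several thousandfold.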

Leading by example

Under Project Convergence, the US Army has experimented with FIRES Synchronisation To Optimise Responses to Multi-Domain operations (FIRESTORM), an AI decision agent used to process huge quantities of data and provide targeting recommendations to a human operator in tenths of seconds. The same process without AI would supposedly take a human tens of minutes.[10] It is reportedly capable of maintaining a clear understanding of the operational picture and matching sensors to shooters. FIRESTORM does not work alone; it receives data that has been turned into a common language by Rainmaker, another AI algorithm that works across all sensors and the communications network to ensure that data is received in a format that can be interpreted and processed by a machine. Rainmaker may also play a role in rebuilding the communications network to ensure that data can progress through a mesh network should a node be degraded by jamming.[11]
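
Rainmaker’s ‘common language’ role can be imagined as a set of per-sensor adapters translating native messages into one shared schema for downstream algorithms. The sketch below is speculative, with invented field names, and is not based on the actual system.

```python
# Speculative sketch of a 'common language' layer: per-sensor adapters
# translate native messages into one shared schema. Field names are invented.
import math
from dataclasses import dataclass

@dataclass
class CommonTrack:
    source: str
    cls: str
    easting: float
    northing: float

def from_radar(msg):
    # a radar reports range/bearing from a known site; convert to grid
    e = msg["site_e"] + msg["range_m"] * math.sin(math.radians(msg["bearing"]))
    n = msg["site_n"] + msg["range_m"] * math.cos(math.radians(msg["bearing"]))
    return CommonTrack("radar", msg["type"], e, n)

def from_uav(msg):
    # a UAV reports geolocated detections directly
    return CommonTrack("uav", msg["label"], msg["e"], msg["n"])

track = from_radar({"site_e": 350_000.0, "site_n": 4_410_000.0,
                    "range_m": 5_000.0, "bearing": 45.0, "type": "rotary_uav"})
print(track)
```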

It is understood that Rainmaker then forwards data to Prometheus, another algorithm responsible for finding targets within Rainmaker’s data. Identified targets are then forwarded to FIRESTORM, which matches the targets to shooters based on one of a number of decision-making protocols. In Convergence 2020, it found six sensor-shooter combinations; this had expanded to 21 by the time the exercise was conducted in 2021. A final algorithm, SHOT (Synchronized High Optempo Targeting), was used to assign the target to a sensor-shooter pair and disconnect all other pairs so that further target engagements could be undertaken. Prometheus was also used at this point to conduct battle damage assessment.[12]
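
How such sensor-shooter matching might work can be gestured at in code. The sketch below is in the spirit of what is described for FIRESTORM and SHOT, not the actual algorithms: it scores in-range shooters against a target, commits the best pair, and leaves the rest free for other engagements. All names and figures are illustrative.

```python
# Illustrative sensor-shooter matching, not the actual FIRESTORM/SHOT logic:
# find every free, in-range shooter, commit the best pair, release the rest.
shooters = [
    {"id": "M109A7", "pos": (0.0, 0.0),  "range_km": 30, "busy": False},
    {"id": "HIMARS", "pos": (5.0, 12.0), "range_km": 80, "busy": False},
]

def assign(target):
    def dist(s):
        return ((s["pos"][0] - target["pos"][0])**2 +
                (s["pos"][1] - target["pos"][1])**2) ** 0.5
    candidates = [s for s in shooters
                  if not s["busy"] and dist(s) <= s["range_km"]]
    if not candidates:
        return None
    best = min(candidates, key=dist)  # simple protocol: closest capable shooter
    best["busy"] = True               # lock this pair; others stay available
    return best["id"]

print(assign({"pos": (10.0, 10.0)}))  # -> 'HIMARS' (the closest capable shooter)
```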

The Israel Defense Forces have at least two systems for sensor fusion. The first is the strategic ‘knowledge factory’, which ingests intelligence from all of the country’s services and employs AI algorithms such as Gospel to analyse it.[13] Gospel is in fact the final algorithm in the chain; it provides targeting recommendations to a human operator in much the same way as FIRESTORM does for the US Army, while other algorithms fuse and analyse the data before it reaches Gospel. It is said to be capable of generating 200 targets in 10–12 days, around 50 times faster than a team of 20 analysts doing the same work. The second system, Fire Weaver, is more tactical: it connects sensors and shooters together in an integrated sensor-to-shooter network and provides a commander with targeting recommendations based on the positioning and effects of every shooter. Gospel and its supporting algorithms have been used operationally in Gaza, and it is likely that Fire Weaver has been deployed too.[14]

The F-22 was the first operational aircraft to combine supercruise, supermanoeuvrability, stealth and, notably, sensor fusion in a single weapons platform. It has since been joined by the F-35, and the US is exploring various means of introducing greater sensor fusion into its armed forces.
Credit: USAF/Staff Sgt Allison Payne

In the UK, sensor fusion for land ISTAR is to be realised through project ZODIAC. A contract covering two further years of ZODIAC delivery was signed with Roke at DSEI 2023. Roke is the delivery partner for the programme, which is expected to provide an underlying systems architecture used to ingest and fuse data from a variety of sensors. The system is expected to be capable of taking data from all sensors, analysing it, and distributing the resulting intelligence to battlefield users across all domains. It will also provide the foundation for the British Army to deploy AI in its efforts to interpret and understand data.[15]

“For most militaries, AI will be spirally developed and integrated into existing kit, as well as integrated into future product. This will require a closer, more innovative approach between primes, defence tech companies and the frontline,” Blyth said. Some, like the US and Israel, started early and have gained a lead in sensor fusion development. For others, such as the UK, programmes have been initiated and companies are vying to produce products that meet anticipated needs. However, the future of the British Army’s funding is far from clear, despite the geopolitical realities the force currently faces.

Looking ahead

At a theoretical level, the advantages of sensor fusion are relatively clear. It enables a force to generate targets quickly, and potentially with a more complete understanding of the battlefield. This should lead to better prioritisation of targets – a force cannot hit everything it is presented with at once – which will be beneficial in attacking an enemy’s network at an operational level. However, human elements should always be considered alongside the excited talk of sensor fusion for land-based ISTAR. Few in the West thought that Russian troops would continue fighting in Ukraine after suffering such heavy losses. Many over-estimated the capabilities of Western weapons in the close and deep fight. Nobody accurately predicted the amount of resistance Hamas would generate despite facing completely overwhelming firepower and superior ISTAR resources. Suffice it to say, sensor fusion provides the means to engage an opponent more efficiently, but a lot more needs to happen in a battlespace to translate this into victory.

Sam Cranny-Evans


[1] https://sgp.fas.org/crs/weapons/IF11654.pdf

[2] https://publications.parliament.uk/pa/cm200910/cmselect/cmdfence/225/225.pdf

[3] https://www.aerotechnews.com/blog/2021/07/16/icbs-g-ator-flight-test-successfully-demonstrates-joint-engagement-in-electronic-attack-environment/

[4] https://www.rusi.org/explore-our-research/publications/commentary/russian-military-objectives-and-capacity-ukraine-through-2024

[5] https://www.independent.co.uk/news/world/europe/ukraine-russia-war-us-uk-troops-b2467652.html

[6] https://ecfr.eu/article/drones-in-ukraine-and-beyond-everything-you-need-to-know/

[7] https://nps.edu/web/ccs/helmand

[8] https://www.raf.mod.uk/aircraft/shadow-r1/

[9] https://academic-accelerator.com/encyclopedia/elbit-hermes-450

[10] https://api.army.mil/e2/c/downloads/2023/01/31/e1c75467/21-616-thoughts-on-pc20-project-convergence-history-way-forward-feb-21-public.pdf

[11] https://militaryembedded.com/radar-ew/sensors/how-rainmaker-prometheus-firestorm-and-shot-ai-algorithms-enable-the-kill-web

[12] https://www.c4isrnet.com/artificial-intelligence/2020/09/25/the-army-just-conducted-a-massive-test-of-its-battlefield-artificial-intelligence-in-the-desert/

[13] https://www.npr.org/2023/12/14/1218643254/israel-is-using-an-ai-system-to-find-targets-in-gaza-experts-say-its-just-the-st

[14] https://defence.nridigital.com/global_defence_technology_may21/rafael_fire_weaver_battlefield_ai

[15] https://www.roke.co.uk/news/digitising-land-tactical-istar