Modern combat systems add new layers to battlespace perception: sensors that expand situational awareness, along with electronics, networking, and cyber capabilities. These layers help to process, disseminate, and exploit information to overcome the hidden enemy from a distance, denying it the element of surprise.

Sensor Fusion in the Changing Battlespace

Warfare has changed in the 21st century. For centuries, military forces confronted each other on the battlefield, where sheer military might, size, troop numbers, and firepower determined the outcome of the battle. The few examples of great leaders winning battles by manoeuvring small forces to surprise and overwhelm the enemy were the exception. The development and deployment of small anti-tank and anti-aircraft weapons, of missiles able to achieve pinpoint strikes at long range, and the military and terrorist use of urban areas have brought about a significant change in modern warfare: the ‘empty battlespace’.

This is where the invisible enemy hides under cover: snipers and anti-tank missile squads disguised as civilians or concealed underground and in urban dwellings, most of them hardly visible to the human eye or to optical sensors. They can launch coordinated, multi-dimensional attacks, striking vehicles from below with remotely operated improvised explosive devices or mines, and from above with long-range missiles, loitering munitions, or weaponised drones. A conventional force facing such a hidden enemy is likely to be surprised, suffer significant losses, and risk being decimated before its units can reorganise and respond to the threat.

Locate the Enemy

Awareness of the threat requires soldiers to be able to detect and locate the hidden enemy. In the past, combat systems were characterised by their firepower, mobility, and survivability. Today, combat systems add more layers to their battlespace perception: sensors that expand situational awareness, plus electronics, networking, and cyber capabilities to process, disseminate, and exploit the information, overcoming the hidden enemy from a distance and denying it the initial surprise. This advantage helps defeat the enemy with direct and indirect fires, employing manned-unmanned teaming to minimise warfighters’ exposure to the threat.

The new concept offers an advantage over peer enemies, as it allows decisive engagement from long range, delivering more accurate and lethal attacks. Land forces, especially their tactical elements, need real-time capabilities to locate these low-signature threats.

Traditionally, combat vehicles have used optical and imaging sensors that assisted the crew in scanning the scene and detecting targets. Against a hidden enemy, however, these means are no longer sufficient. Multi-spectral sensors are required to see such low-signature targets from a distance. Sensors operating in different bands – visual, thermal, acoustic, and millimetre-wave – can spot the signatures of enemy presence and activity, and locate and engage those targets from stand-off range.

Sensors for Combat Systems

Most sensors used on combat vehicles are passive imaging sensors. Today, different sensors feed bespoke displays: panoramic cameras, gunner and commander sights, driver vision systems, and optronic sensors mounted on weapon stations. These sensors can be connected to the crewmembers’ displays, showing each user the picture most relevant to their task. Moreover, by employing machine learning and artificial intelligence analysis on the live sensor feeds, an automated system can constantly analyse the situation, searching for signatures and patterns to spot targets and alert the crew all around the vehicle, even in sectors the crew is not watching.
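
As an illustration, the following minimal Python sketch shows how such automated scanning might be wired up, using a generic pretrained detector from torchvision. The feed path, alert threshold, and everyday object classes are placeholders; a fielded system would use a model trained on military signatures.

import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# Generic pretrained detector standing in for a purpose-trained model
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

cap = cv2.VideoCapture("panoramic_feed.mp4")  # hypothetical camera feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Convert the BGR frame to an RGB tensor and run one inference pass
    batch = [to_tensor(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))]
    with torch.no_grad():
        result = model(batch)[0]
    for box, score, label in zip(result["boxes"], result["scores"], result["labels"]):
        if score > 0.7:  # alert threshold is a tuning assumption
            print(f"alert: class {int(label)} at {box.tolist()} ({score:.2f})")
cap.release()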

Radars are another class of sensor employed in modern combat systems. Radar provides an active sensor, detecting missiles, drones, and other threats directed at the vehicle. As an active sensor, radar may be used in silent mode: maintained in standby and triggered by other sensors, such as a laser-warning sensor or an electro-optical fire locator. Tracking a moving target, the radar can plot the target’s ballistic trajectory, predict the impact point, and calculate the point of origin to enable a quick counterstrike against the launcher. Acoustic sensors, in turn, are designed to detect and locate hostile fire sources, such as sniper shots, gunfire, or rocket launches. Such information can be correlated with other sensors to refine alerts and the situational picture.
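
The back-calculation of the launcher’s position can be illustrated with a simple drag-free model: fit the radar’s track samples to a parabola and solve for the times at which the projectile is at ground level. The Python sketch below uses synthetic track data; a real fire-finding radar would apply far more sophisticated trajectory models accounting for drag, wind, and terrain.

import numpy as np

# Synthetic radar track of a projectile: time (s) and position (m) samples,
# generated from a drag-free arc (x = 200t, y = 5t, z = 49t - 4.9t^2)
t = np.array([4.0, 4.5, 5.0, 5.5, 6.0])
x = np.array([800.0, 900.0, 1000.0, 1100.0, 1200.0])
y = np.array([20.0, 22.5, 25.0, 27.5, 30.0])
z = np.array([117.6, 121.3, 122.5, 121.3, 117.6])

# Drag-free model: x and y are linear in time, z is quadratic under gravity
px, py, pz = np.polyfit(t, x, 1), np.polyfit(t, y, 1), np.polyfit(t, z, 2)

# Launch and impact occur where altitude returns to ground level (z = 0)
t_launch, t_impact = sorted(np.roots(pz).real)

for name, tc in (("point of origin", t_launch), ("predicted impact", t_impact)):
    print(f"{name}: x={np.polyval(px, tc):.0f} m, y={np.polyval(py, tc):.0f} m (t={tc:.1f} s)")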

Other sensors that may be employed are signals intelligence (SIGINT) sensors, which detect human activity in the radio-frequency (RF) spectrum. Such signals include cellular or radio emissions that betray enemy activity. Tactical SIGINT sensors may geolocate these signals and assist in target location, early warning, and situational awareness.
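
One common geolocation technique is time-difference-of-arrival (TDOA) multilateration across several RF sensors. The hedged sketch below recovers an emitter position by least squares; the sensor layout and measurements are synthetic, and a real system would also have to contend with clock errors and multipath.

import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

# Assumed positions of three RF sensors on a local grid (m), and the TDOA of
# an intercepted emission at each sensor relative to sensor 0 (synthetic)
sensors = np.array([[0.0, 0.0], [3000.0, 0.0], [0.0, 4000.0]])
emitter_true = np.array([2200.0, 1700.0])
ranges = np.linalg.norm(sensors - emitter_true, axis=1)
tdoa = (ranges - ranges[0]) / C  # what the receivers would actually measure

def residuals(p):
    # Mismatch between measured and predicted time differences for guess p
    r = np.linalg.norm(sensors - p, axis=1)
    return (r - r[0]) / C - tdoa

fit = least_squares(residuals, x0=np.array([1.0, 1.0]))
print("estimated emitter position:", fit.x)  # approximately [2200, 1700]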

Connecting the Edges

Situational awareness cannot be developed from raw sensors alone, as excessive information clutters the view and disrupts the crew’s situational perception. Sensor fusion combines multiple sensor feeds into a single situational map. The user interface should be simplified and decluttered, presenting the user with the clearest, most relevant information for decision-making and response.
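
In its simplest form, such fusion can be pictured as gating and merging reports that plausibly describe the same object. In the following toy Python sketch, the positions, sensor names, and the 10 m gate are illustrative assumptions; four raw reports collapse into two fused objects on the map.

import numpy as np

# Detections from different onboard sensors as (sensor, x, y, confidence),
# in a common vehicle-centric grid (m); values are illustrative
reports = [
    ("EO",       120.0,  45.0, 0.90),
    ("thermal",  122.0,  44.0, 0.80),
    ("acoustic", 119.0,  47.0, 0.60),
    ("radar",    310.0, -80.0, 0.95),
]

GATE = 10.0  # metres: reports closer than this are assumed to be one object

fused = []
for sensor, x, y, conf in reports:
    for obj in fused:
        if np.hypot(obj["x"] - x, obj["y"] - y) < GATE:
            # Merge into an existing fused object: average the position,
            # record the extra source, keep the best confidence
            n = len(obj["sensors"])
            obj["x"] = (obj["x"] * n + x) / (n + 1)
            obj["y"] = (obj["y"] * n + y) / (n + 1)
            obj["sensors"].append(sensor)
            obj["conf"] = max(obj["conf"], conf)
            break
    else:
        fused.append({"x": x, "y": y, "sensors": [sensor], "conf": conf})

for obj in fused:
    print(obj)  # two decluttered objects instead of four raw reports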

Sensor fusion is complex, as it requires significant computing power and high-speed connectivity to transfer and process large volumes of data in real time. With multiple sensors mounted on combat vehicles, the task is complicated further by Size, Weight, and Power (SWaP) constraints.

Connecting all onboard sensors to a central processing unit is complex and costly, particularly when the sensors deliver raw imagery to the processor, which requires high-bandwidth local network connectivity. To reduce the bandwidth needed for data transfer, sensors employ edge processing: a local processor coupled with the sensor performs essential image processing and compression, significantly reducing the volume of data transferred to the vehicle’s sensor-fusion processor, where more complex algorithms, including AI/ML computing, are employed.
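
The bandwidth saving is easy to illustrate: rather than shipping a raw frame to the central processor, the edge node extracts and compresses only the region of interest. In this sketch the ‘detector’ is a fixed placeholder window; a real edge node would run a lightweight model next to the sensor.

import cv2
import numpy as np

# Synthetic 1080p frame standing in for a raw camera image (~6 MB as BGR)
frame = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)

# Edge step 1: select a region of interest locally (placeholder window)
x, y, w, h = 800, 400, 200, 150
chip = frame[y:y+h, x:x+w]

# Edge step 2: compress only the chip for transfer to the fusion processor
ok, payload = cv2.imencode(".jpg", chip, [cv2.IMWRITE_JPEG_QUALITY, 80])

print(f"raw frame: {frame.nbytes/1e6:.1f} MB, transmitted: {payload.nbytes/1e3:.1f} kB")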

Open Systems Standards

Systems designed with the Modular Open System Approach (MOSA) set the standard for rapidly sharing such information from machine to machine, both at the edge and at the centre. Such systems provide the rugged physical hardware, power, cooling, and RF connectivity, with standards-based slots ready to receive electronic commercial off-the-shelf (COTS) subsystems that drive software-based, mission-specific functions. As COTS hardware, these cost-effective building blocks provide the interoperability and flexibility needed to deploy systems across land, air, and sea platforms. On the sensor side, the Sensor Open Systems Architecture (SOSA) provides the standard framework for transitioning sensor systems to an open systems architecture. While SOSA is a voluntary recommendation adopted by more than 50 consortium members, it gives users flexibility in selecting and acquiring sensors and subsystems for sensor data collection, processing, exploitation, communication, and related functions over the complete life cycle of the systems.

While bespoke solutions often provide higher performance than COTS-based systems, adhering to COTS and open standards enables more frequent upgrades and enhancements of electronic and electro-optic systems, adding more advanced capabilities over time at a much faster pace and lower cost. For example, machine learning applications take signal processing and threat identification to new levels of speed and intelligence. Such an application compares real-world data from sensors against the millions of examples it has been trained on, and uses the results of those comparisons to make decisions, take actions, and provide warfighters with insight they would not otherwise have.

Importance of Data

The advanced software in a machine learning application relies on complex algorithms that can quickly process large volumes of data. Executing that software depends on the speed and processing power of specialised hardware that is likely not available on the platform today. When all system components follow MOSA, the processor card driving a traditional image processing application can be replaced with a more sophisticated card that drives a machine learning application. With updated software and a simple card swap, warfighters gain essential new threat identification capabilities with minimal disruption and no increase in SWaP.

While machine learning is a specific example of the need for speed in the field, the requirement to more quickly process ever-increasing volumes of data from more sources applies to almost every application in use today. With the ability to almost instantly upgrade any card type that performs any function, in any system, with a faster, more powerful replacement, warfighters have new opportunities to stay ahead of threats and increase their tactical advantage.

Such upgrades may use Field Programmable Gate Array (FPGA) cards optimised for:

  • radar
  • electronic warfare
  • signals intelligence
  • radar warning receivers
  • software-defined radio applications

Image processing may use General Purpose Graphics Processing Units (GPGPUs) that bring TFLOPS of processing power to electro-optical/infrared (EO/IR) applications. These GPGPUs are essential to capture and manipulate the substantial data streams from gigapixel cameras. Software-defined radio modules and Position, Navigation, and Timing (PNT) cards likewise eliminate the need for disparate PNT services for each system. A CMOSS-aligned timing card embedded in the central processor lets all connected systems interoperate against a consistent, standardised positioning and timing reference. SWaP-C can be reduced significantly by replacing large, individual radio boxes with a small card requiring only one chassis slot. In the NATO context, vetronics follow the Generic Vehicle Architecture (GVA) and NATO GVA (NGVA, STANAG 4754) standards.

Saab offers several Rugged series systems designed to comply with the GVA and NGVA standards. The Rugged Vehicle Computer (RVC) is a high-performance vehicle computer unit based on a 7th-generation single-board computer platform and designed to comply with contemporary vetronics standards. Another computer, the Rugged Vehicle Computer-Embedded (RVC-E), is a fully rugged, super-compact, high-performance computer unit developed for demanding applications and driven by an NVIDIA Jetson TX2 platform. The RDC10 display packs an Intel Core i7 CPU platform, several external interface ports, and a customised connector housing and configuration. These systems are designed to meet DEF STAN 23-09 GVA and STANAG 4754 NGVA.

Next-Generation Combat Vehicle Vetronics

Vetronics applications can be implemented during vehicle upgrades but are often implemented on a larger scale in new vehicle designs. One such example is Elbit Systems’ Torch-X Mounted, designed to comply with Generic Vehicle Architecture (GVA) vetronics standards. The system provides a centralised user interface that combines all platform sensors and effectors, giving the commander and crew a single interface for unified, integrated situational awareness based on all of the vehicle’s and the combat team’s sensors and effectors. The system’s AI-based decision support optimises the use of those sensors and effectors.

Another example among future programmes is the Franco-German Main Ground Combat System (MGCS), which will replace Germany’s LEOPARD 2 and France’s LECLERC Main Battle Tanks (MBTs). MGCS is driving requirements that would implement AI-empowered sensor fusion. MGCS does not seek to replace the MBTs one-for-one, as was the case with previous tank programmes, but will instead employ a system-of-systems approach that incorporates manned and unmanned ground vehicles and unmanned aerial systems with advanced automatic targeting and self-protection capabilities. Connectivity between the vehicles, shared situational awareness, and targeting require fusion and AI not only on board each vehicle, but also extending across sensors and vehicles through a high-speed broadband network.

German electronics system house HENSOLDT is one of the solution providers positioned to deliver such an integrated sensor fusion network. The company already provides the see-through armour system (SETAS), which offers the crew a 360-degree view, even of areas the users are not looking at. SETAS can integrate and communicate with other systems, for example by sharing geo-referenced objects with a neighbouring vehicle; such data can be collected from different battlefield areas and fused into a multi-dimensional situational picture. Onboard processing makes this possible, because the processing is carried out close to the sensor and does not require transferring the data elsewhere.
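
A hypothetical sketch of what sharing such a geo-referenced object might look like at the message level is shown below. The schema, field names, network address, and port are illustrative assumptions for this article, not the SETAS interface.

import json
import socket
from dataclasses import dataclass, asdict

@dataclass
class GeoObject:
    # Assumed message schema for an object one vehicle shares with another
    track_id: int
    lat: float
    lon: float
    classification: str
    confidence: float
    source: str

obj = GeoObject(42, 52.5200, 13.4050, "ATGM team", 0.87, "EO panoramic sensor")

# Broadcast the object on the local vehicle network (address/port assumed)
msg = json.dumps(asdict(obj)).encode("utf-8")
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(msg, ("192.168.1.255", 5005))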

Further Sensor Fusion Examples

Elta Systems’ Athena AI core represents an innovative approach for optionally manned fighting vehicles. The system was implemented on the CARMEL technology demonstrator. Athena autonomously combines all data received from external and internal sources, including EO, radar, and SIGINT. It then analyses and classifies targets, prioritises the response, and displays the situational picture to the crew for assessment and decision-making. Athena also provides recommendations for action and follows the crew’s decisions by closing the operational loop with the appropriate effectors on and off board.
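
The fuse-classify-prioritise flow described here can be caricatured in a few lines of Python. This is emphatically not Elta’s implementation: the contact list, lethality weights, and scoring heuristic are all invented for illustration.

# Fused contacts after correlation across EO, radar, and SIGINT (synthetic)
contacts = [
    {"id": 1, "type": "ATGM team", "range_m": 1800, "sources": ["EO", "SIGINT"]},
    {"id": 2, "type": "UAV",       "range_m":  900, "sources": ["radar"]},
    {"id": 3, "type": "dismount",  "range_m": 2500, "sources": ["thermal"]},
]

LETHALITY = {"ATGM team": 0.9, "UAV": 0.7, "dismount": 0.3}  # assumed weights

def threat_score(c):
    # Heuristic blend of lethality, proximity, and multi-sensor confirmation
    proximity = max(0.0, 1.0 - c["range_m"] / 4000.0)
    confirmation = min(1.0, len(c["sources"]) / 2.0)
    return LETHALITY[c["type"]] * (0.5 * proximity + 0.5 * confirmation)

for c in sorted(contacts, key=threat_score, reverse=True):
    print(f"priority {threat_score(c):.2f}: {c['type']} (track {c['id']})")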

AI will ultimately have the most significant impact in instances where it is emulating a human’s abilities rather than just those of the human brain – that is, where it can assess information in the same way a human can. AI’s ability to conduct sensor fusion and track correlation – drawing on a wide range of inputs and far quicker than human operators – will bring a step-change in capabilities.
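
Track correlation itself is a well-studied problem, and a standard baseline makes the point: build a track-to-detection cost matrix and solve the assignment optimally. In the sketch below the positions are synthetic and the 25 m gate is an assumed tuning value.

import numpy as np
from scipy.optimize import linear_sum_assignment

# Predicted track positions and new detections on a common grid (m, synthetic)
tracks = np.array([[100.0, 50.0], [400.0, -20.0], [250.0, 300.0]])
detections = np.array([[402.0, -18.0], [101.0, 52.0], [600.0, 600.0]])

# Cost matrix: distance between every track and every detection
cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)

row, col = linear_sum_assignment(cost)  # optimal one-to-one pairing
GATE = 25.0  # metres: reject pairings too far apart to be the same object
for tr, de in zip(row, col):
    if cost[tr, de] < GATE:
        print(f"track {tr} updated with detection {de} ({cost[tr, de]:.1f} m)")
    else:
        print(f"track {tr} unmatched; detection {de} may start a new track")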