
With warfare becoming more complex, sophisticated and lethal than ever before, armed forces are considering adopting surrogates for missions deemed suitable for machines.

These could be tasks that are too dangerous for humans, too difficult (beyond their physical capabilities), or too repetitive or otherwise dull for a normal person to pursue. These three Ds (danger, difficulty and dullness) defined the first generation of military robots, which practically replaced the human with a machine that performed the same function.

Typical examples include unmanned aircraft flying over enemy territory to collect intelligence, or robots assisting experts in handling unexploded bombs and Improvised Explosive Devices (IEDs) on Explosive Ordnance Disposal (EOD) missions. Meanwhile, compact robotic tractors support infantry squads by carrying their loads, and remotely controlled vehicles patrol borders, maintaining vigilance over long and potentially dangerous routes and reducing the risk of casualties along contested frontiers. While these systems use varying degrees of automation for their tasks, their operation often requires multiple operators and technicians to perform supervised or otherwise controlled or managed operations.

From Automation to Autonomy

Autonomy refers to a level of independence devolved to a machine to accomplish specific tasks. Autonomy may imply periods of complete independence or, for the more complex parts of a mission, a level of operator supervision. Both modes provide new opportunities, as they enable machines to undertake more tasks with less support and personnel.

Tele-operated, remotely controlled and robotic systems are becoming more autonomous as they become ‘smarter’, enabling the machine to take more ‘responsibility’ through the mission phases. Some of those activities, such as the take-off and landing of drones, improve safety, as the system eliminates the human error that often causes mishaps. On land, trucks following each other in a convoy may operate autonomously, or be supervised from a drone or from one of the vehicles. Without drivers, such convoys can be smaller and faster, and require minimal escort, offering many advantages for logisticians.

Autonomy does not always mean that the system operates completely by itself. Rather, it implies the facilitation of specific tasks within a mission and ensures the relevant platform can continue the mission and return safely to its base even when the control link is lost.

Manned–unmanned combat teams might dominate the battlefield of the future. (Photo: via author)

Some autonomous systems empower the mission controller, by delegating specific tasks the machine can do better, enabling the human controller to focus on the more complex, high-level tasks. For example, autonomous drones can exploit Artificial Intelligence and Machine Learning (AI/ML) to filter the multitude of data from their various sensors, providing their operators with only the most relevant information.
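
To illustrate the principle, here is a minimal sketch of such a relevance filter; the scoring rule, field names and thresholds are hypothetical, not any fielded system’s logic:

```python
from dataclasses import dataclass

@dataclass
class SensorReport:
    sensor: str        # e.g. "EO", "IR", "SIGINT"
    confidence: float  # detector confidence, 0..1
    priority: float    # mission-assigned weight for this target class, 0..1

def relevance(report: SensorReport) -> float:
    # Hypothetical scoring rule: weight detector confidence by mission priority.
    return report.confidence * report.priority

def filter_for_operator(reports, threshold=0.6):
    """Forward only the most relevant reports, sorted for display."""
    kept = [r for r in reports if relevance(r) >= threshold]
    return sorted(kept, key=relevance, reverse=True)

if __name__ == "__main__":
    feed = [
        SensorReport("EO", confidence=0.9, priority=0.8),      # likely high-priority target
        SensorReport("IR", confidence=0.4, priority=0.9),      # weak detection, filtered out
        SensorReport("SIGINT", confidence=0.7, priority=0.3),  # low-priority emitter, filtered out
    ]
    for r in filter_for_operator(feed):
        print(r.sensor, round(relevance(r), 2))
```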

Manned fighter planes are also becoming ‘autonomous’ with respect to sensors and data processing: pilots no longer monitor individual sensor feeds, with AI/ML acting as a filter that provides the most useful data to the pilot. Machines digging through endless volumes of information are not limited to processing it in real time; they can employ ‘Space-Time Adaptive Processing’ techniques that combine real-time sensing with historical data, spotting anomalies or patterns that normal processing cannot predict. These methods empower the latest approaches in countermeasures and electronic attack.
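
A toy example of the underlying idea – comparing a real-time measurement against a historical baseline to flag anomalies that single-pass processing would treat as noise. The z-score test and all numbers are purely illustrative:

```python
import statistics

def is_anomaly(sample: float, history: list[float], k: float = 3.0) -> bool:
    """Flag a real-time sample that deviates from the historical baseline
    by more than k standard deviations (a simple z-score test)."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return sample != mean
    return abs(sample - mean) / stdev > k

# Historical returns for one sensor cell, then two new real-time samples:
history = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02]
print(is_anomaly(1.03, history))  # False: consistent with the recorded pattern
print(is_anomaly(4.0, history))   # True: an anomaly only visible against history
```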

Adversary Autonomy and Counter Autonomy

The development of AI has increasingly become a national security concern with all leading military powers racing to implement AI, specifically in military applications and intelligent weapons. “Whoever becomes the leader in this sphere will become the ruler of the world,” said Russian president Vladimir Putin. But it is China that has great ambitions in this area, planning to become the global leader in AI research by 2030. While AI and autonomous systems have big potential, they are also vulnerable to attack.

Mission-critical military AI/ML systems are vulnerable to deception and interference, and are easily located on the battlefield by the electromagnetic emissions they depend upon; they should therefore be designed to protect against such attacks. DARPA and the European Defence Agency (EDA) are pursuing parallel studies addressing adversarial attempts to alter, corrupt or deceive such systems: the EDA is exploring protection against enemy interference, while DARPA runs the Guaranteeing AI Robustness Against Deception (GARD) programme. These studies are meant to pre-empt the chaos that could ensue in the near future, when attack methodologies, now in their infancy, have matured to a more destructive level.
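
Neither study’s methods are public in detail, but the vulnerability they address can be shown in a few lines. The sketch below applies the classic fast-gradient-sign perturbation to a toy linear classifier; all weights and numbers are invented for illustration:

```python
import numpy as np

# A toy linear classifier: score = w.x + b, decision = sign(score).
w = np.array([0.8, -0.5, 0.3])
b = 0.1

def classify(x):
    return 1 if w @ x + b > 0 else -1

x = np.array([1.0, 0.2, 0.5])   # a benign input, correctly classified as +1
print(classify(x))               # 1

# Fast-gradient-sign-style attack: for a linear model, the gradient of the
# score with respect to the input is simply w, so a small bounded step
# against it is the most damaging perturbation per unit of change.
eps = 0.6
x_adv = x - eps * np.sign(w)
print(classify(x_adv))           # -1: a bounded input change flips the decision
```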

Autonomous lethal weapons are already operational in the ‘loitering weapons’ category, which combines the long endurance and reconnaissance capability of a drone with the guided-attack capability of a missile. Designed as affordable and disposable assets, loitering munitions leverage autonomy as a key to effective use. Unlike larger drones, loitering weapons are launched from a forward location, sometimes by the combat elements themselves, and perform missions against targets suspected to be in a specific area. Users therefore need to respond quickly and intuitively, focusing on the mission rather than on flying the drone. To meet these requirements, the loitering weapon supports various functions enabling autonomous operation in certain mission phases, switching between operating modes on user command. For example, take-off and flight to a designated area are performed automatically, while entering attack mode or aborting an attack is done by user command. A high level of autonomy and robustness, particularly in GPS-denied locations, enables the loitering weapon to evade the soft-kill measures employed by enemy counter-UAS systems, although ‘hard kill’ air defences remain as effective against it as against any other air threat.
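
The mode logic described above can be sketched as a small state machine; the states, commands and transition rules below are illustrative, not any vendor’s actual interface:

```python
from enum import Enum, auto

class Mode(Enum):
    LAUNCH = auto()
    TRANSIT = auto()   # autonomous flight to the designated area
    LOITER = auto()    # autonomous search over the suspected target area
    ATTACK = auto()    # entered only on operator command
    ABORT = auto()     # operator can wave off an attack at any time

# Which transitions the machine may take on its own vs. on user command.
AUTONOMOUS = {(Mode.LAUNCH, Mode.TRANSIT), (Mode.TRANSIT, Mode.LOITER)}
COMMANDED = {(Mode.LOITER, Mode.ATTACK), (Mode.ATTACK, Mode.ABORT),
             (Mode.ABORT, Mode.LOITER)}

def next_mode(current: Mode, target: Mode, user_command: bool) -> Mode:
    """Allow autonomous transitions freely; lethal transitions need a command."""
    if (current, target) in AUTONOMOUS:
        return target
    if (current, target) in COMMANDED and user_command:
        return target
    return current  # refuse: stay in the current mode

mode = Mode.LAUNCH
mode = next_mode(mode, Mode.TRANSIT, user_command=False)  # autonomous
mode = next_mode(mode, Mode.LOITER, user_command=False)   # autonomous
mode = next_mode(mode, Mode.ATTACK, user_command=False)   # refused
print(mode)                                               # Mode.LOITER
mode = next_mode(mode, Mode.ATTACK, user_command=True)    # operator commits
print(mode)                                               # Mode.ATTACK
```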

Designing a Trustworthy Machine

Traditionally, tasks that utilise autonomy have been based on procedural processes, resulting in robotic systems that are prone to decision errors; humans therefore still do not trust machines with critical decisions. The next level of autonomy will utilise ‘Learning-Enabled Components’ – sensors, processors and algorithms coupled with neural networks and artificial intelligence to enable a learning process – as Learning-Enabled Cyber-Physical Systems (LE-CPS) gradually adapt to the unstructured environments they operate in.

The ATHENA artificial intelligence server designed by IAI. (Photo: Tamir Eshel)

DARPA is exploring learning machines under the Assured Autonomy programme. Instead of relying on human supervision for problem solving, such machines will be able to explain their recommendation, and learn from the supervisor’s response. Such intelligent machines may also inform the user about their ‘competency’ to perform a specific task under certain conditions (weather, visibility, degraded mobility, and so on). Portraying the ‘probability of success’ to the user enables the human to assess the robot’s ability to perform the task and decide whether to rely on the machine or on other means to perform the task. DARPA is pursuing these functions under the Competency-Aware Machine Learning (CAML) programme.
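
CAML’s internals are not public, but the idea described here – a machine reporting its own ‘probability of success’ so the human can decide – can be caricatured in a few lines. The baseline and condition penalties below are invented:

```python
# A hypothetical competency model: the robot reports its own probability of
# success for a task under the current conditions, and the human decides.
BASELINE = 0.95
PENALTY = {"rain": 0.15, "night": 0.10, "degraded_mobility": 0.25}

def probability_of_success(conditions: set[str]) -> float:
    p = BASELINE
    for c in conditions:
        p -= PENALTY.get(c, 0.0)   # each adverse condition erodes competency
    return max(p, 0.0)

def delegate(task: str, conditions: set[str], threshold: float = 0.7) -> str:
    p = probability_of_success(conditions)
    verdict = "delegate to robot" if p >= threshold else "use other means"
    return f"{task}: estimated success {p:.0%} -> {verdict}"

print(delegate("route reconnaissance", {"night"}))                     # 85% -> delegate
print(delegate("route reconnaissance", {"rain", "degraded_mobility"}))  # 55% -> other means
```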

Some autonomous systems are already calling the shots in situations where the human is too slow to respond. Active Protection Systems – employing ‘hard kill’ against incoming rockets and missiles – use sensors, processors and countermeasures to defeat such threats in fractions of a second. Systems like the StrikeShield from Rheinmetall are set to engage the incoming threat in a fully autonomous manner. Other systems also add important situational awareness information. Rafael’s TROPHY, for example, points the countermeasures at the incoming threat, and also ‘suggests’ that the commander engage the source of fire – either with the vehicle on which the system is mounted or with other elements in the combat formation. Unlike the automatic employment of ‘hard kill’, it leaves the decision to aim the gun at the target and open fire to the commander, who may consider other tasks more important at that time.
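
The division of labour described here – automatic hard kill within the threat’s fraction-of-a-second flight time, but a human decision on counterfire – might be sketched as follows. This is a conceptual illustration, not TROPHY’s or StrikeShield’s actual logic:

```python
import time

def hard_kill(threat_bearing_deg: float) -> None:
    # Fully autonomous: with well under a second of flight time left,
    # no human could react in time.
    print(f"countermeasure fired, bearing {threat_bearing_deg:.0f} deg")

def on_threat_detected(bearing_deg: float, crew_queue: list) -> None:
    t0 = time.perf_counter()
    hard_kill(bearing_deg)                       # machine decision, milliseconds
    elapsed_ms = (time.perf_counter() - t0) * 1e3
    print(f"autonomous response in {elapsed_ms:.2f} ms")
    # Counterfire is only *suggested*: the commander may have higher priorities.
    crew_queue.append(f"suggest: engage source of fire at bearing {bearing_deg:.0f}")

queue: list = []
on_threat_detected(247.0, queue)
print(queue.pop(0))   # the human decides whether to aim the gun and fire
```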

Autonomy and Man-Machine Teaming

Used by Special Weapons and Tactics (SWAT) law-enforcement teams and counter-terror operatives, DOGO is a smart robot that provides surveillance in confined urban spaces. DOGO is sent ahead of the agents to scout the ‘fatal funnel’, providing situational awareness and a remote engagement capability, thus reducing the risk to the entire team. Before DOGO, the only way to achieve this was to send a team member into a potentially deadly situation, exposed to both enemy and friendly fire. Like the agents it supports, DOGO carries a weapon – an automatic handgun that can be aimed and operated by remote control.

The DRAGONFLY loitering weapon (Photo: via author)

The robot combines eight video cameras into a live panoramic view of the scene. The scene, displayed on the RANGER remote control unit, provides an intuitive ‘point and move’ or ‘point and shoot’ capability, with which the user can steer the robot to a required location, then aim and shoot at a selected target with high precision.
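
A ‘point and move’ interface of this kind ultimately reduces to mapping a touch on the stitched panorama to a heading. A minimal sketch, assuming a single equirectangular 360° image in place of the eight-camera stitch:

```python
def pixel_to_heading(x_pixel: int, image_width: int) -> float:
    """Map a horizontal pixel in a 360-degree panorama to a heading in degrees,
    with the image centre taken as the robot's current forward direction."""
    fraction = x_pixel / image_width           # 0..1 across the panorama
    return (fraction * 360.0 - 180.0) % 360.0  # remapped to a 0..360 compass heading

# Touches on a 4096-pixel-wide stitched panorama:
print(pixel_to_heading(2048, 4096))  # 0.0 -> straight ahead
print(pixel_to_heading(3072, 4096))  # 90.0 -> turn right
```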

General Robotics, the developer of the DOGO tactical robot, also employs advanced algorithms to enable counter-terror operatives to optimise their close-quarters battle (CQB) capabilities while minimising the risk to agents. In 2019, the company introduced the DOGO 2, enabling faster and more efficient target engagement in unfamiliar settings.

Autonomy is also making strides in supporting dismounted combat formations with versatile platforms that are field-configurable to meet the needs of the warfighter. Rheinmetall’s MISSION MASTER all-terrain robotic vehicle is designed with this capability in mind.

MISSION MASTER can be deployed in places that are difficult to access, whether operating autonomously or serving in ‘mule’ mode, following its human operator. Fitted with a mast-mounted sensor pack it assumes a reconnaissance role, and on other missions it is loaded with a remotely operated weapon station or rocket launcher, to function as a mobile fire support platform. With the first unit delivered in 2019, Qatar became the first user of the MISSION MASTER robot.

The MISSION MASTER platform is based on an electrically powered 8×8 rover controlled by an ‘Advanced Robotic Intelligence System’ (ARIS) developed by Provectus Robotics Solutions. ARIS combines feeds from on-board sensors with semi-autonomous and autonomous control algorithms, delivering flexible operating modes. For example, the system implements deep-learning technology enabling the robot to recognise and follow a person, providing a ‘leader–follower’ function and improving operation with humans.
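
A leader–follower behaviour of this general kind can be sketched with a person detector feeding a simple proportional controller. ARIS’s actual algorithms are not public; the detector below is a stub and the gains are invented:

```python
# A minimal leader-follower control sketch. The person detector is assumed
# to exist (in ARIS it reportedly uses deep learning); here its output is
# the leader's bearing (deg, + = right) and range (m), or None if lost.

FOLLOW_DISTANCE = 5.0   # keep this many metres behind the leader
GAIN_TURN = 0.02        # proportional gains, invented for illustration
GAIN_SPEED = 0.4

def follow_step(detection):
    """Return (speed_mps, turn_rate) commands for one control cycle."""
    if detection is None:
        return 0.0, 0.0                      # leader lost: stop and wait
    bearing_deg, range_m = detection
    turn = GAIN_TURN * bearing_deg           # steer toward the leader
    speed = GAIN_SPEED * (range_m - FOLLOW_DISTANCE)
    return max(speed, 0.0), turn             # never reverse into the squad

print(follow_step((12.0, 9.0)))   # leader ahead-right, too far: speed up, turn right
print(follow_step((0.0, 5.0)))    # on station: hold position
print(follow_step(None))          # detection lost: stop
```
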
ARIS is NATO STANAG 4586 compliant and designed as a platform-agnostic framework handling mapping, planning and autonomy functions. The system also includes perception and positioning subsystems, with chassis and payload control interfaces enabling efficient integration with different platforms and payloads.

These essential capabilities led Rheinmetall Canada Inc., the subsidiary of the Düsseldorf-based Rheinmetall Group, to secure a 100% holding in Provectus, ensuring the know-how and support for its growing unmanned ground vehicle activity.

In the near future, modern combat vehicles will be able to operate ‘unmanned’ or at least remotely controlled, with the introduction of ‘drive-by-wire’ capabilities as part of the vehicle architecture. With autonomous control systems like IAI’s ATHENA, such vehicles will be able to operate as unmanned or autonomous platforms.

Some vehicles already have a certain level of independence. The CAESAR truck-mounted self-propelled howitzer from NEXTER offers an optional self-loading capability, enabling the crew to operate the vehicle by remote control from a distance. Patria designed its AMV 8×8 armoured vehicle with ‘drive-by-wire’ technology that will enable driverless operation in the future. And the Optionally Manned Combat Vehicle, part of the US Army’s Next Generation Combat Vehicle (NGCV) programme, will have this capability.

Leveraging this unmanned capability in a manned–unmanned teaming environment, the US Army plans to test optionally manned fighting vehicles and ground robotic vehicles in a platoon, performing missions currently done by a company, or even a battalion. The robotic platoon will be able to dominate up to 75 square kilometres, leveraging the capabilities offered by advanced manned–unmanned teaming. Army robotics experts consider that by spreading combat elements thinly across the area and deploying numerous communications nodes to establish a secure, independent mesh network – unlike a cellular or combat-net radio network – such a combat formation will be ‘hidden’ within an ‘always-on’ network that presents no clear hierarchical structure to the adversary.
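
The ‘no clear hierarchy’ property comes from the topology itself: in a flat mesh every node is a peer relay, so traffic reveals no command structure. A toy flooding example, with invented links (real meshes form them dynamically over radio):

```python
from collections import deque

# A flat mesh: every node is a peer relay, so traffic analysis reveals
# no command hierarchy.
LINKS = {
    "A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D", "E"},
    "D": {"B", "C", "E"}, "E": {"C", "D"},
}

def flood(source: str) -> list[str]:
    """Deliver a message by flooding: each node forwards once to its peers."""
    seen, order, queue = {source}, [], deque([source])
    while queue:
        node = queue.popleft()
        order.append(node)                 # this node has received the message
        for peer in sorted(LINKS[node] - seen):
            seen.add(peer)
            queue.append(peer)
    return order

# Any node can originate: the message reaches everyone with no central relay.
print(flood("E"))  # ['E', 'C', 'D', 'A', 'B']
```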

Teaching Sensors to Look for Targets

Employing certain signal-processing algorithms, sensors can be ‘taught’ to perform specific tasks. Growing computing power allows sensors and systems to take on ever more complex processing – for example, recognising shapes, patterns or movements from different angles. Such ‘machine learning’ processes teach systems to recognise certain objects, and thereby present the user with a ‘suspicious object’ alert rather than a raw picture. An example is ThirdEye’s passive drone-detection system MEDUSA, which uses infrared cameras to detect drones over long distances by day and night. Drawing on target libraries the system has been trained to recognise, such machine-learning systems employ compact circuitry to detect different targets against complex backgrounds, below the horizon and in cluttered scenes.
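
Conceptually, such a system matches a detection’s features against its target library and raises an alert only on a confident match. A toy sketch with invented feature vectors and a cosine-similarity match:

```python
import math

# A hypothetical 'target library': reference feature vectors the system was
# trained to recognise. Real systems learn these; the numbers are invented.
LIBRARY = {
    "quadcopter": [0.9, 0.1, 0.4],
    "fixed-wing UAS": [0.2, 0.8, 0.5],
    "bird": [0.3, 0.3, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def classify(features, threshold=0.95):
    """Return an alert string instead of imagery: best library match, if any."""
    name, score = max(((n, cosine(features, v)) for n, v in LIBRARY.items()),
                      key=lambda t: t[1])
    if score < threshold:
        return "no alert (clutter)"
    return f"ALERT: {name} (match {score:.2f})"

print(classify([0.85, 0.15, 0.42]))  # close to the quadcopter signature
print(classify([0.5, 0.5, 0.5]))     # ambiguous: stays silent
```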

Smart Vision for Intelligent Vehicles

The HERO-400EC loitering weapon (Photo: via author)

With the advent of AI/ML, platform-mounted camera systems have become smarter, taking on the new responsibility of processing video streams for object recognition and target tracking. Having multiple sensors deliver such situational understanding on a single vehicle demands a different AI/ML processing approach. The Canadian company Pleora Technologies has introduced a ‘plug-in’ AI solution as part of its RuggedConnect smart video switch, which couples multiple video sources on a vehicle to AI/ML processing over Gigabit Ethernet. The system leverages a commercially available processor that adds decision-support capabilities to the platform.

For example, as a driver-assistance service, the system spots changing soil conditions, gradients and obstacles. These functions are supported by a library developed by Mission Control Space Services of Canada. This Vehicle/Terrain AI Safety System provides real-time terrain data, even in very rough, low-visibility and changing environments, to help increase safety and intelligence while lowering costs for defence departments. Another library, developed by Lemay.ai, provides threat detection, identification and classification functions for the gunner and commander.
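
The architecture described – a switch routing camera frames through role-specific AI plug-ins and passing annotations, not raw video, to each crew station – might look like this in outline. The plug-in names mirror the roles above; the interfaces are invented:

```python
# A sketch of the 'plug-in AI' switch: camera frames are routed through
# role-specific analysis plug-ins, and crew displays receive annotations
# rather than raw streams.

def terrain_safety(frame):       # stands in for the terrain-analysis library
    return ["gradient ahead: 18 deg", "soft soil left of track"]

def threat_detection(frame):     # stands in for the threat-classification library
    return ["possible ATGM team, 11 o'clock"]

PLUGINS = {"driver": [terrain_safety], "gunner": [threat_detection]}

def switch(frame, role: str) -> list[str]:
    """Route one frame through every plug-in registered for the crew role."""
    annotations = []
    for plugin in PLUGINS.get(role, []):
        annotations.extend(plugin(frame))
    return annotations

frame = object()                 # placeholder for a video frame
print(switch(frame, "driver"))
print(switch(frame, "gunner"))
```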

Autonomous Weapon Control

By the mid-2020s, the US Army is expected to field the first ‘Optionally Manned Fighting Vehicles’ (OMFVs). Even in manned operation, the vehicles’ sensors and weapons will employ an ‘autonomous turret’ concept. Such a turret will also be used on the ‘Robotic Combat Vehicle’, a smaller robotic platform that will be able to operate with the OMFV as part of the manned–unmanned team.

Both systems will benefit from the Advanced Targeting and Lethality Automated System (ATLAS) the Army has asked industry to explore. Fusing feeds from multiple multi-spectral sensors and sensing disciplines (visible, NIR, SWIR, MWIR, LWIR, LADAR/LIDAR), ATLAS will be able to develop situational awareness; detect, classify, recognise and identify different targets; and be set to engage those targets under the most relevant rules of engagement.
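
ATLAS remains a solicitation rather than a published design, but the pipeline this paragraph implies – fuse multi-band detections, classify, then gate any engagement recommendation behind the rules of engagement and an operator – can be sketched as follows. Labels, thresholds and the fusion rule are invented:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    band: str          # e.g. "MWIR", "LWIR", "LADAR"
    label: str         # classifier output for this band
    confidence: float

def fuse(detections: list[Detection]) -> tuple[str, float]:
    """Naive multi-sensor fusion: majority label, averaged confidence.
    (Real fusion would weight bands and geometry; this is illustrative.)"""
    labels = [d.label for d in detections]
    label = max(set(labels), key=labels.count)
    conf = sum(d.confidence for d in detections if d.label == label) / labels.count(label)
    return label, conf

def roe_gate(label: str, conf: float, weapons_free: bool) -> str:
    """No engagement recommendation unless identification and ROE both allow it."""
    if label != "hostile vehicle" or conf < 0.9:
        return "track only"
    return "recommend engagement (await operator consent)" if weapons_free else "track only"

dets = [Detection("MWIR", "hostile vehicle", 0.93),
        Detection("LWIR", "hostile vehicle", 0.91),
        Detection("LADAR", "civilian vehicle", 0.55)]
label, conf = fuse(dets)
print(label, round(conf, 2), "->", roe_gate(label, conf, weapons_free=True))
```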

The Israeli Army has already demonstrated such capability on the CARMEL future combat vehicle technology demonstrator. The vehicle employs multi-intelligence sensor fusion, associated with centrally processed AI/ML to provide the crew with clear representation of threats and targets, enabling the crew to focus on the most critical targets, ensuring the vehicle’s survivability and mission success.

The first phase of the CARMEL programme tested a revolutionary new concept, proving that such a combat vehicle can be operated effectively by two persons under closed hatches. The test results clearly demonstrated not only that it can be done, but that, given a high level of situational awareness and understanding, a crew of two under armour is far more effective, even in the most complex situations. A key to this efficient performance was the delegation of missions to ‘virtual crewmembers’ that assume certain tasks, such as driving or maintaining situational awareness over specific sectors, while the human operators take charge of the more complex tasks and decisions, as well as supervising the employment of lethal effects. The first phase examined three different designs from Israel’s leading defence companies. Despite the high level of automation and AI, all concepts reiterated that human dialogue based on a common display in the cockpit is essential for efficient teamwork.

The next phase – CARMEL 2.0 – will test an armoured vehicle that will implement the sensor-rich two-man crew concept with additional capabilities, performing a mission for a tank platoon using a single manned vehicle with two robotic companions.

A Moral Responsibility?

As intelligent machines become increasingly involved in combat support and command and control, Lethal Autonomous Weapon Systems (LAWS) are becoming more realistic and more controversial. Advocates and critics of autonomous weaponry disagree over whether such systems can be equipped with algorithms sufficiently adept at distinguishing between targets to satisfy the laws of war. So far, the United Nations has failed to broker an agreement on regulating or limiting some of these weapon categories. In the absence of an organised campaign highlighting the moral and legal controversy of the new technology, the military powers and industrial complex seem set to race forward, making LAWS a reality for soldiers – and a nightmare for those campaigning against them.

Tamir Eshel