
The digital age is gathering momentum, and ‘future’ and ‘autonomy’ have become prolific buzzwords that appear in almost every discussion of defence technology. They should not be dismissed as hype, however: their effects will be powerful.

Whilst some picture future autonomy as self-driving cars, uncrewed vessels and aircraft, developments are moving at an exceptional pace, with the ultimate aim being systems that accomplish objectives independently, or at least with minimal human supervision. Autonomous systems will be given a mission and be able to conduct it whilst adapting to ever-changing and varied operational conditions. Autonomy is a critical enabler.

Uncrewed Systems

Uncrewed systems have been in use for well over a decade, and defence forces will see higher degrees of autonomy made possible through Artificial Intelligence (AI) and cognitive computing. Human workload and cognitive burden will be dramatically reduced, in turn freeing soldiers to carry out important decision-making tasks. Support tools are being created to provide Real-time Execution Decision Support (REDS), assisting commanders in making complex decisions and finding solutions at greater speed. General Sir Nick Carter, former UK Chief of the Defence Staff, notes that there is a clear operational advantage: “Mass is no longer the asset it once was — it is all about effect.”

Reducing the density of humans on the battlefield will not be a matter of choice, as militaries face a steady decline in recruitment coupled with dwindling physical resources.

Robotics and autonomous systems (RAS) will not be used only for tasks considered dangerous or high-risk for their human counterparts. Automating monotonous and time-consuming tasks will free humans to concentrate on work that requires cerebral input, as these repetitive tasks can now be carried out by algorithms and AI at exceptional speed. Project Maven (also known as the Algorithmic Warfare Cross-Functional Team (AWCFT)) was one example: in 2017 the United States Department of Defense (DoD) used AI technology (computer vision algorithms) to analyse and tag images captured by surveillance aircraft and reconnaissance satellites.
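As a rough illustration of that kind of image-tagging workload (not the actual Maven tooling), the sketch below runs an off-the-shelf pretrained classifier from torchvision over a single frame; the model choice and file path are purely illustrative stand-ins.

```python
import torch
from torchvision import models
from PIL import Image

# Off-the-shelf ImageNet classifier standing in for a purpose-built military model.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

def tag_image(path, top_k=3):
    """Return the top-k (label, confidence) tags for one image frame."""
    frame = Image.open(path).convert("RGB")
    batch = preprocess(frame).unsqueeze(0)          # shape: (1, 3, H, W)
    with torch.no_grad():
        probs = torch.softmax(model(batch)[0], dim=0)
    conf, idx = probs.topk(top_k)
    return [(categories[int(i)], float(c)) for i, c in zip(idx, conf)]

# Hypothetical usage: tag a frame pulled from a surveillance feed.
print(tag_image("frame_0001.jpg"))
```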

Autonomous Platforms

Autonomous platforms – such as Uncrewed Aircraft Systems (UAS), Uncrewed Ground Vehicles (UGV) and Uncrewed Maritime Vehicles (UMV) – need to be able to make sense of the world around them: perceiving a threat, analysing the scenario, communicating back to a commander or operator, planning their next steps, making their own decisions (on which weapon to use, for instance) and acting (moving away from danger). Essentially, they must be able to make sense of both predictable and unpredictable environments. This shift from tool to machine teammate has already begun and will transform the battlespace. In the short term, human-robot interaction (HRI) will be a key focus as we begin to understand and interpret fast-paced systems and their associated actions. The degree and quantity of human involvement will require robustness, coordination and increased interoperability.
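To make that cycle concrete, here is a minimal, purely illustrative sketch of a sense-decide-communicate-act loop; the sensor readings, decision policy and effector calls are hypothetical placeholders, not any fielded system's logic.

```python
import random
import time

def sense():
    """Return a simplified picture of the environment (hypothetical sensor feed)."""
    return {"threat_detected": random.random() < 0.2,
            "battery": random.uniform(0.1, 1.0)}

def decide(observation):
    """Very crude decision policy: withdraw from threats, recharge when low."""
    if observation["threat_detected"]:
        return "evade_and_hold"
    if observation["battery"] < 0.2:
        return "return_to_base"
    return "continue_mission"

def report(observation, decision):
    """Relay situational awareness and intent back to the operator."""
    print(f"REPORT: {observation} -> {decision}")

def act(decision):
    """Carry out the chosen action (placeholder for real effectors)."""
    print(f"ACT: {decision}")

# Perceive -> decide -> communicate -> act, repeated for a few cycles.
for _ in range(5):
    obs = sense()
    choice = decide(obs)
    report(obs, choice)
    act(choice)
    time.sleep(0.1)
```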

Human-Robot Interaction

Growing digital skills among defence personnel will be absolutely critical in order to build trust, provide safety assurance and integrate data technologies. Human-Machine Teaming (HMT) ensures that digitally skilled personnel can enable the data technology on these systems. There will be a significant user burden initially, but as autonomy improves, less human interaction will be required.
Without humans making every decision, autonomous systems will need to distribute tasks amongst themselves. These multi-agent systems (MAS) will autonomously re-task based on situational awareness. Negotiations, or ‘bids’, are used by the agents to decide which tasks should be allocated to which agent and which should be acted upon next. Stephen Bornstein, CEO of Cyborg Dynamics, spoke to EDS about the likely path ahead: “AI will support navigation, communications bearer handling, target acquisition and acoustic detection in its first instance. It will then grow to support the coordination and collaboration of multiple robots simultaneously.”
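A minimal sketch of how such bid-based allocation can work, assuming a simple greedy single-item auction in which each agent's bid is just its distance to the task; the agent and task names are invented for illustration.

```python
import math

# Hypothetical agents (id -> position) and tasks (id -> position).
agents = {"uav1": (0.0, 0.0), "uav2": (5.0, 5.0), "ugv1": (10.0, 0.0)}
tasks = {"recce_A": (1.0, 1.0), "relay_B": (6.0, 4.0), "resupply_C": (9.0, 1.0)}

def bid(agent_pos, task_pos):
    """Lower bid = better suited; here simply the distance to the task."""
    return math.dist(agent_pos, task_pos)

def allocate(agents, tasks):
    """Greedy single-item auction: each task goes to the cheapest remaining agent."""
    assignment, free_agents = {}, dict(agents)
    for task_id, task_pos in tasks.items():
        if not free_agents:
            break
        winner = min(free_agents, key=lambda a: bid(free_agents[a], task_pos))
        assignment[task_id] = winner
        del free_agents[winner]
    return assignment

print(allocate(agents, tasks))
# e.g. {'recce_A': 'uav1', 'relay_B': 'uav2', 'resupply_C': 'ugv1'}
```

Fielded systems use far more sophisticated auction and consensus schemes, but the principle of agents exchanging bids and converging on an allocation is the same.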

Swarms

By linking platforms, sensors and systems together and connecting the battlefield, forces are strengthened, and the effectiveness of formations can be increased through multiple uncrewed systems. Zachary Kallenborn, Policy Fellow at the Schar School of Policy and Government, George Mason University, and Research Affiliate with the Unconventional Weapons and Technology Division of the National Consortium for the Study of Terrorism and Responses to Terrorism (START), spoke to EDS about the benefits of swarms: “Swarming UAVs enable the use of mass – attackers can keep throwing drones against a defender until they are overwhelmed. That may not take much, because current counter UAS are not well-designed, if at all, for handling multiple incoming drones. Drone swarms also create complexity on the battlefield, because they are so well suited for combined arms tactics. Attack drones within the swarm can be equipped with a range of payloads from bombs to anti-tank and electronic warfare systems. Swarms also enable the creation of modular, distributed sensor networks to search over large areas for desired targets, relaying that information to other manned or unmanned assets or collective battle intelligence networks.”

Interconnectivity

Interconnectivity is key, and there has been a great leap in the development of communications technology, networks, computing, processing and the miniaturisation of sensors. All platforms, systems, sensors and effectors need to be able to communicate with each other and be linked to provide a stable digital backbone. A contested electromagnetic (EM) spectrum is a fundamental issue for defence, leaving radars and sensors unable to locate one another. Communications are essential for effective command and control. Passive techniques and adaptive machine learning (ML) technologies need to be employed as countermeasures against disruptive, high-precision electromagnetic strikes from adversary systems such as high-energy lasers. ML is well suited to electronic intelligence (ELINT), identifying patterns such as flows of signals and recognising those that are unknown.
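As an illustration of that ELINT use case, the sketch below trains an off-the-shelf anomaly detector on a catalogue of known emitter signatures and flags intercepts that match nothing in it; the features and numbers are synthetic, not real parametrics.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "known emitter" signatures: [carrier frequency (GHz), pulse width (us)].
known = np.column_stack([rng.normal(9.4, 0.1, 500),
                         rng.normal(1.0, 0.05, 500)])

# New intercepts: two familiar-looking signals plus one unlike anything catalogued.
intercepts = np.array([[9.41, 1.02],
                       [9.38, 0.98],
                       [15.80, 0.10]])

detector = IsolationForest(contamination=0.01, random_state=0).fit(known)
labels = detector.predict(intercepts)   # +1 = matches known patterns, -1 = unknown

for sig, lab in zip(intercepts, labels):
    print(sig, "known" if lab == 1 else "flag as unknown emitter")
```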

Loitering Munitions

The US Army’s Rapid Capabilities and Critical Technologies Office (RCCTO) is seeking loitering munitions and swarm technology. Loitering munitions are likely to forge ahead because they are low cost, easy to deploy and able to provide air support to dismounted troops when larger UAS are unavailable. Their small size makes them harder to detect and hit; the trade-off is that power limitations reduce their flight time. The Nagorno-Karabakh war saw loitering munitions used by both sides, although Azerbaijan had the edge as Armenia’s anti-air defences struggled against them. These expendable ‘suicide’ drones are increasingly being used by non-state actors, with the risk of disastrous consequences.
Different battlefield scenarios will require varying levels of autonomy, which will gradually shape future warfare. Improvements will come, but there will be many hurdles along the way, with each domain facing its own challenges, whether from cyber-attacks, underwater communications, GPS-denied environments or operating beyond visual line of sight (BVLOS). Beyond those individual challenges, bringing the domains together for joint operations, whether with allies, across all domains or as joint forces, is problematic in itself.
Before autonomous systems can make a real impact, solutions need to be found to address connectivity, interoperability and bandwidth issues to provide an interconnected battlespace. Streamlined command, control and communications (C3) will be a key enabler on the future battlefield. An open digital architecture with the standardisation of networks and data will support the military’s efforts towards multidomain integration and transformation.

AI and Algorithms

That said, AI and the algorithms it relies on can only be as good as the data they are built on. As we move further into the digital age, vast amounts of data will be produced, placing a heavy emphasis on analysis and cleansing. Intelligence preparation of the operational environment, whether for cyberspace, threat assessments or specific characteristics such as terrain, demographics and weather, requires a high level of accuracy if autonomous systems and AI-enabled assets are to operate efficiently.
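A hedged example of the sort of basic cleansing and validation step implied here, using pandas on an invented track-report feed; the column names and thresholds are placeholders, not any real data standard.

```python
import pandas as pd

# Hypothetical sensor feed with the usual defects: duplicates, gaps, out-of-range values.
raw = pd.DataFrame({
    "track_id": [101, 101, 102, 103, 104],
    "lat": [54.2, 54.2, 91.5, 55.1, None],    # 91.5 is outside valid latitude
    "lon": [12.1, 12.1, 13.0, 12.8, 12.9],
    "speed_kts": [220, 220, 180, -5, 300],    # negative speed is invalid
})

clean = raw.drop_duplicates()                     # remove repeated reports
clean = clean.dropna(subset=["lat", "lon"])       # drop incomplete positions
clean = clean[clean["lat"].between(-90, 90) &     # enforce physical ranges
              (clean["speed_kts"] >= 0)]

print(f"{len(raw)} raw records -> {len(clean)} usable")
print(clean)
```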

As with all technology, adversaries will try to exploit it. Backdoor attacks can be planted through malicious model training and activated once the AI enters production. Other techniques are designed to cause malfunction, with input data arranged so as to confuse the system or exploit its vulnerabilities. A well-known example of this misclassification, or deceptive input, came to light when researchers at MIT fooled deep learning (DL) algorithms into classifying a turtle as a rifle. Altering just a few pixels of an image, or placing tape over a stop sign, can be enough to confuse the software.
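The underlying idea of such deceptive inputs can be shown with a toy example: an FGSM-style perturbation, in which each input element is nudged slightly in the direction that most increases the classifier's score. The linear ‘classifier’ below is a stand-in for a deep network, not the MIT experiment itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear classifier standing in for a deep network:
# score = w . x, score > 0 -> "rifle", otherwise -> "turtle".
w = rng.normal(size=784)
x = -0.01 * np.sign(w) + 0.001 * rng.normal(size=784)   # a benign input scored as "turtle"

def classify(v):
    return "rifle" if w @ v > 0 else "turtle"

print("clean input:     ", classify(x))

# FGSM-style perturbation: a tiny step in the direction that most increases the score.
eps = 0.02
x_adv = x + eps * np.sign(w)

print("perturbed input: ", classify(x_adv))
print("max change per element:", np.max(np.abs(x_adv - x)))   # tiny per-pixel change
```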

Algorithms are now being trained to detect these adversarial techniques, although it is a slow process. Stochastic defence introduces randomness into the behaviour of neural networks, meaning the attacker cannot affect every model. Adding layers, and switching between different blocks within those layers, increases accuracy. Better AI is not only a matter of accuracy, however; it also depends on trust in the system’s decision-making. If operators or commanders are not able to trust the system, then they simply won’t use it.
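One way to picture the stochastic-defence idea is an ensemble in which the model answering each query is chosen at random, so a perturbation crafted against one fixed network no longer transfers reliably; the ‘models’ below are trivial stand-ins, continuing the toy example above.

```python
import random
import numpy as np

rng = np.random.default_rng(0)

# Three independently trained stand-in models (random linear classifiers here).
models = [rng.normal(size=784) for _ in range(3)]

def predict_stochastic(x):
    """Pick one model at random per query, so the attack surface shifts each time."""
    w = random.choice(models)
    return "rifle" if w @ x > 0 else "turtle"

# The attacker crafts a perturbation against model 0 only...
target = models[0]
x_adv = np.zeros(784) + 0.02 * np.sign(target)

# ...but repeated queries hit different models, so the attack no longer works every time.
print([predict_stochastic(x_adv) for _ in range(5)])
```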

In order to gain that trust, greater transparency is required with regard to accountability. There can be huge implications in not understanding or knowing the path (the code) a machine took to reach a decision. These ‘black box’ problems are examples of where robust surrounding policies, safety precautions and frameworks will need to be tightened.
Assured autonomy will play a big part in the global race for dominance. Most countries have now released an AI strategy document, recognising that AI leadership is key to economic and strategic development, including in the military sphere. China is already flexing its muscles and demonstrating that it is an AI superpower: its 2017 AI strategy document projected that the country would “become a global AI innovation centre” by 2030, and outlined that cultivating talent, as well as developing the technology, is high on the agenda. Russia is also looking to raise its level of AI expertise; on 10 December 2021, the Russian Ministry of Education stated that an All-Russian Olympiad on AI for schoolchildren will be held annually.

Samuel Bendett, Analyst at the Center for Naval Analyses and adjunct senior fellow at the Center for a New American Security, spoke to EDS: “I think that at this point, Russia’s major AI and autonomy research and developments track closely with major global trends – the Russian MoD is seeking swarm and group applications, manned-unmanned teaming arrangements, integration of military autonomy (especially UAVs) into the same operating space as piloted and crewed systems, investments in AI as a C2 and decision-making tool for military robotics and for regular formations, and the overall integration of different types of autonomous systems into a common combat operational and information environment. We see this in tests and trials of systems like MARKER, ORION-E, URAN-6 and URAN-9, in exercises like Zapad-2021 and in technologies and concepts Russian developers bring to major international defense expos like the Dubai Air Show.”

Bendett believes that AI will play an increasingly important role in the future as a key element of current and future Russian combat systems, especially when it comes to military autonomy. “All of the above will be accompanied by extensive testing and evaluation – in Russia during drills and exercises, and potentially in Syria in actual combat, to really stress-test how such systems perform,” adds Bendett.

The United States DoD is also pushing the boundaries of AI and its capabilities, with billions of dollars being invested over the next five years, and research efforts are on the rise worldwide. Richard Moore, Chief of the Secret Intelligence Service (MI6), stated: “Our adversaries are pouring money and ambition into mastering artificial intelligence, quantum computing and synthetic biology because they know… this will give them leverage”.

As autonomy progresses, a new type of force will emerge. As we move deeper into the digital age, software and cyber technology will be high on the agenda, with forces reaching for that ultimate asymmetric edge over their adversaries.