The need to operate in increasingly complex and demanding operational environments threatens to overload the human-centred command chain on board a modern warship. Expanded use of rapidly developing AI techniques offers the potential to address this challenge, revolutionising decision-making in the command space. This article examines recent experimentation in this area, using the Royal Navy as an example.

Background

The combat information centre (CIC) serves as the focal point for picture compilation, mission management, and weapon control on board a modern warship. It hosts a team of human operators responsible for providing tacticians and commanders with rationalised information as the basis for real-time decisions. Members of the warfare team interact with computerised consoles, displays, communication devices, and other peripherals to build a collective appreciation of the tactical situation; evaluate and prioritise threats; and manage the ‘battle’ on, above, and below the sea surface.

An Electronics Technician tracks surface and air contacts aboard the guided-missile destroyer USS Paul Hamilton (DDG-60). Command teams increasingly face cognitive overload as threats grow in complexity, environments become more challenging, and data volumes grow. (Photo: US Navy)

Currently, the command chain in the CIC is based on a highly prescriptive and human-centred decision-making hierarchy, with compilers and operators building a tactical picture from a variety of organic and non-organic sources to enable timely and informed tactical decision-making: for example, courses to steer to open weapon arcs, or the execution of soft-kill countermeasure ploys. However, it is recognised that command teams are at increasing risk of overload as naval forces are called upon to operate in ever more complex and demanding operational environments characterised by diverse and increasingly challenging threats.

At the same time, ships are in receipt of ever greater volumes of data from both organic sensors and non-organic sources, thus complicating the ability of command teams to identify, understand and react to the threat scenario. Operators are also subject to increasing strain: staring at a screen for several hours at a time on defence watches requires intensive human concentration, even with break periods. Any lapse could mean a contact is missed or misidentified.

It is against this backdrop that naval practitioners, operations staff, defence scientists, industry and academia have all begun to consider how increased automation and greater use of artificial intelligence (AI) techniques can improve the acuity and speed of decision-making in the command and control space. Definitions vary, but in broad terms AI can be characterised as ‘intelligent behaviours’ displayed by machines. In essence, this describes the ability of machines to mimic the cognitive functions employed by humans for reasoning, planning, learning and problem-solving tasks.

AI has already started to enter the mainstream in the commercial and consumer sectors as corporations have seen the potential of AI to improve productivity, increase efficiency, and simplify task execution. Navies are now also keen to harness the power of ‘machine-speed’ AI in command and decision-making, recognising that AI techniques are good at extrapolating patterns, trends and signals from noisy and dynamic data. At the same time, there is an understanding that integrating human operators and computers in an effective and efficient socio-technical organisation brings with it a myriad of technical, operational and ethical complexities.

The CIC of the Visby class corvette HSwMS Karlstad. Future AI-enabled command and control systems will require human and machine teaming to be considered as a fundamental part of the design process. (Photo: Richard Scott)

AI in Context

High-level automation is by no means new to naval warfare. For instance, self-defence weapon systems set to ‘auto’ mode will automatically engage when pre-determined engagement threshold conditions are met. This represents a very rudimentary form of AI in so far as the weapon system has the ability to assume a function otherwise performed by a human. However, it should be made clear that this is not a learning system, as it only functions according to pre-programmed rule sets.
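By way of illustration, the minimal sketch below shows what such a pre-programmed rule set might look like in code. The thresholds, field names and identity labels are purely notional assumptions; the key point is that every condition is fixed in advance, so the system can only ever do what its rules already say and never adapts from experience.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A simplified air track; fields and units are illustrative only."""
    range_km: float           # current range to own ship
    closing_speed_mps: float  # positive when the track is inbound
    identity: str             # e.g. 'hostile', 'unknown', 'friendly'

# Fixed, pre-programmed thresholds: set before the mission, never updated by the system itself.
MAX_ENGAGE_RANGE_KM = 15.0
MIN_CLOSING_SPEED_MPS = 250.0

def auto_engage(track: Track) -> bool:
    """Return True only when every hard-coded engagement condition is met."""
    return (
        track.identity == "hostile"
        and track.range_km <= MAX_ENGAGE_RANGE_KM
        and track.closing_speed_mps >= MIN_CLOSING_SPEED_MPS
    )

print(auto_engage(Track(range_km=12.0, closing_speed_mps=600.0, identity="hostile")))  # True
```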

Initial thinking on the implementation of early forms of AI into the command environment goes back to the 1980s. Royal Navy (RN) ship losses in the South Atlantic, the anti-ship missile attack on the frigate USS Stark (FFG-31) in the Gulf, and the inadvertent shoot-down of an Iranian A300 airliner by the cruiser USS Vincennes (CG-49) all provided evidence of the fragility and fallibility of action information organisations reliant on large and hierarchical human-centric command chains. In some situations, a combination of high workload and battle stress overwhelmed the cognitive capacities of operators, leading them to incorrectly assess a situation and/or miscalculate the appropriate response. In others, a lack of attention by operators and warfare officers meant threats were ignored even when there were clear indications that an attack was imminent.

By the 1990s, some limited attempts were made to introduce forms of AI into the command chain. However, these so-called ‘expert’ systems – implementing a form of AI based on a knowledge base containing embedded doctrine or rules – encountered a number of shortfalls and limitations. For example, constraints imposed by the computing throughput and accessible memory of the era necessarily limited the complexity of software implementation. Also, these knowledge-based techniques were very rigid in their implementation – relying on rules that had been distilled from operator experience – and so very narrow in their application.
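The rigidity of such knowledge-based approaches can be illustrated with a toy rule base of the kind that might be distilled from operator doctrine. The rules, attribute names and classifications below are invented purely for illustration and reflect no real system; anything the rule author did not anticipate simply falls through as ‘unknown’.

```python
# A toy 'expert system': doctrine distilled into static if-then rules.
# Rule conditions and labels are illustrative assumptions, not real doctrine.
RULES = [
    (lambda c: c["emitter"] == "fire_control_radar", "probable hostile"),
    (lambda c: c["speed_kts"] > 600 and c["altitude_ft"] < 500, "possible sea-skimming missile"),
    (lambda c: c["iff_response"] == "valid", "friendly"),
]

def classify(contact: dict) -> str:
    """Apply rules in priority order; anything not covered by a rule cannot be classified."""
    for condition, label in RULES:
        if condition(contact):
            return label
    return "unknown"  # the system cannot generalise beyond its rule set

contact = {"emitter": "navigation_radar", "speed_kts": 450, "altitude_ft": 30000, "iff_response": "none"}
print(classify(contact))  # 'unknown'
```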

Johns Hopkins University Applied Physics Laboratory’s hard kill/soft kill (HK/SK) Performance Assessment Tool (HaSPAT) prototype was deployed aboard the US Navy Aegis cruiser USS Bunker Hill (CG-52) in early 2020. (Photo: US Navy)

Renewed interest in the implementation of AI in naval command and control reflects the significant advances in technology and techniques over the last decade – most importantly, the revolution in deep learning that today enables computers to learn and generalise in a more human-like way on a specific task. At the same time, there is a better appreciation of where AI could add value in the command process: for example, by helping to alert operators to potential threats at an earlier stage, or to support threat evaluation and weapon assignment (TEWA) in complex multi-threat scenarios.

It should also be understood that, for the foreseeable future at least, the idea of completely replacing human beings with machines is not being countenanced. Rather, the focus is on the exploitation of AI techniques to cut down the workload of decision-makers, and thereby give humans more time and improved clarity when they plan missions, estimate adversary capabilities, or consider taking a particular course of action. In short, AI can deliver critical decision support when time is limited or when the number of choices is too large for humans to analyse all alternatives.

One example of such a decision aid is the hard kill/soft kill (HK/SK) Performance Assessment Tool (HaSPAT) prototype developed by the Johns Hopkins University Applied Physics Laboratory (JHU APL). Designed to help operators understand the planned defensive posture and evaluate combat system performance before an enemy attack, HaSPAT also balances weapon inventory by advising what resources are available and ensuring adequate magazine capacity is retained for self-defence. Engineers from JHU APL developed HaSPAT after a June 2019 visit to the US Navy Aegis cruiser USS Bunker Hill (CG-52). After discussions with the ship’s commanding officer, the decision was made to undertake rapid development of a tool that could help the warfare team on board better plan and coordinate the use of hard-kill and soft-kill effectors.

Intelligent Ship Phase 2 saw a total of 10 intelligent agents funded and an ‘integrator’ selected to manage the development of the ISAIN environment. (Graphics: Dstl)

HaSPAT incorporates information about weapon effectiveness to support weapon assignment and scheduling, and embeds a simulation to produce analytics and performance metrics to inform the user of possible risks associated with configurations. It was also designed such that users could set up different force battlespace configurations for area and self-defence experiments.
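JHU APL has not published HaSPAT’s internal algorithms, but the general idea of effectiveness-based weapon assignment with a protected self-defence reserve can be sketched as follows. All effector names, probability-of-kill figures and reserve levels here are assumptions chosen only to make the logic concrete.

```python
# Hypothetical single-salvo assignment: choose the most effective effector per threat
# while keeping a minimum self-defence reserve in each magazine.
EFFECTIVENESS = {               # notional probability of kill per (effector, threat type)
    ("SAM", "missile"): 0.85,
    ("SAM", "aircraft"): 0.90,
    ("decoy", "missile"): 0.60,
    ("decoy", "aircraft"): 0.10,
}
inventory = {"SAM": 8, "decoy": 20}
SELF_DEFENCE_RESERVE = {"SAM": 4, "decoy": 8}

def assign(threats: list[str]) -> list[tuple[str, str]]:
    """Greedy assignment: highest-Pk effector that still has stock above its reserve."""
    plan = []
    for threat in threats:
        candidates = sorted(
            (eff for (eff, t) in EFFECTIVENESS
             if t == threat and inventory[eff] > SELF_DEFENCE_RESERVE[eff]),
            key=lambda eff: EFFECTIVENESS[(eff, threat)],
            reverse=True,
        )
        if candidates:
            best = candidates[0]
            inventory[best] -= 1      # expend one round and update remaining stock
            plan.append((threat, best))
    return plan

print(assign(["missile", "missile", "aircraft"]))
```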

The prototype tool was deployed on board USS Bunker Hill in early 2020 so that crew on board could evaluate HaSPAT functionality and offer feedback for further updates. According to JHU APL, this initial demonstration has provided a stepping stone to a much more significant hard-kill/soft-kill coordination capability at force level.

Across the Atlantic, prototype decision aids designed to accelerate and improve command team situation awareness and threat analysis in stressing above-water warfare scenarios have also been subject to operational experimentation at sea by the British RN. For example, a number of AI tools were evaluated by the RN and the Defence Science and Technology Laboratory (Dstl) during the At-Sea Demonstration/’Formidable Shield 21’ exercise in May 2021. One was Roke’s STARTLE application, which is designed to help ease the load on operators monitoring the air picture by providing real-time recommendations and alerts. Another was CGI UK’s System Coordinating Integrated Effect Assignment (SYCOIEA) automated platform and force TEWA application.

The Intelligent Ship

It is recognised that a central challenge going forward is how to engineer the interaction and teaming of human operators with computers and AI software programs so as to minimise the ‘friction’ between human intent and the execution of that intent using automated or autonomous systems. This integration – the seam of which is the human-computer interface – must recognise that humans are not just ‘users’ or ‘operators’ but are themselves part of the decision-making loop, and so integral to function and output.

This need to examine some of the key issues around the potential for AI to transform command decision-making was, in 2019, the catalyst for the UK’s Dstl to launch a multi-phase science and technology (S&T) project known as the ‘Intelligent Ship’. Funded by the Ministry of Defence (MoD) as part of its wider autonomy S&T programme, this ongoing effort represents a ground-breaking attempt to engineer a collaborative ‘system of systems’ in which automation and AI are more closely integrated and teamed with humans to enable more timely and better informed planning and decision-making. Importantly, the Intelligent Ship project set out to demonstrate a future command and control concept in which humans and AI ‘agents’ were designed in at the outset, rather than simply having AI added into a traditional action information organisation. Furthermore, it also recognised that the system of systems would include machine-machine teaming as well as human-machine teaming.

The Command Lab at Dstl’s Portsdown West facility served as a testbed environment for Intelligent Ship Phase 2 evaluations. (Photo: Dstl)

Phase 1 of the Intelligent Ship programme involved a series of ‘challenge’ themes – mission planning and decision aids, information fusion, sensor and information management, novel human-machine interfaces, human-machine teaming, and integration – representative of the various functions and capabilities found in a typical warship. These included components supporting platform systems, as well as command planning and decision aids.
A core part of this initial six-month phase was a task to develop an Intelligent Ship AI Network (ISAIN) framework. Developed under the leadership of CGI UK with support from DIEM Analytics, Human Factors Engineering Solutions and Decision Lab, ISAIN is an environment within which human-machine teaming can be explored in different scenarios, enabling the development and evaluation of new organisation and workflow structures that capitalise on the use of AIs working alongside humans. This offers the potential to dynamically shift workload between human, AI, or both, depending on the situation and its complexity. In addition, the ISAIN framework offers a proving ground for system-of-systems studies and promotes research into innovative mechanisms that support and facilitate the activities and interactions of all members of the team (both human and AI).

Questions to be explored include how different AIs and humans collaborate, the most appropriate mix of AI and human capability, the best ways to organise AIs and humans to achieve goals as a team, and the means to arbitrate or de-conflict contrary advice or actions from multiple AIs.
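How ISAIN allocates work internally has not been described in detail. Purely as an illustration of dynamically shifting workload between human and machine, the sketch below routes each task to an AI agent, a human operator, or both, using assumed complexity and operator-load thresholds that stand in for whatever policy a real system would apply.

```python
def allocate(task_complexity: float, operator_load: float) -> str:
    """
    Illustrative allocation policy (the thresholds are assumptions, not ISAIN behaviour):
      - routine tasks go to the AI agent alone,
      - complex tasks under low operator load go to the human,
      - everything else is handled jointly, with the AI recommending and the human approving.
    """
    if task_complexity < 0.3:
        return "AI"
    if task_complexity > 0.7 and operator_load < 0.5:
        return "human"
    return "human + AI"

for complexity, load in [(0.2, 0.9), (0.8, 0.2), (0.5, 0.6)]:
    print(f"complexity={complexity}, load={load} -> {allocate(complexity, load)}")
```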
Alongside ISAIN, Phase 1 of the Intelligent Ship also funded the maturation of AIs – or Agents for Decision Making (ADeMs) – that could be integrated into ISAIN for the demonstrations. ADeM is a term adopted by the project to describe either a human or a machine-based intelligent agent operating within a mixed human-machine or machine-machine team.

A call for Phase 2 of the Intelligent Ship project was issued through the MoD’s Defence and Security Accelerator (DASA) in June 2020. DASA funds innovative and potentially exploitable S&T ideas that could lead to a cost-effective advantage for UK armed forces and national security. A total of nine Phase 2 contracts – cumulatively valued at around GBP 3 M – were awarded in November that year. Approximately half of that figure went to CGI UK as ISAIN integrator and developmental lead. In this role, CGI UK partnered with Dstl for ISAIN integration, installation of ISAIN into a Command Lab established at Dstl’s Portsdown West site, design development for how the various aspects of the Intelligent Ship would come together within the ISAIN environment, and integration of selected ADeMs into the ISAIN architecture.

DASA committed the remainder of Phase 2 funding to the development of specific ‘trained’ AIs. Individual contracts were awarded to Decision Lab, DIEM Analytics, Frazer Nash Consultancy, Montvieux (which received two awards), Nottingham Trent University, Rolls-Royce and SeeByte. CGI UK produced a software development kit, based on industry standards and tools, which was provided to the various ADeM suppliers.

Alongside the DASA contracts, the Tactical Navigation (TacNav) agent previously developed under Dstl’s ‘Progeny’ framework was pulled through into Intelligent Ship Phase 2. Developed by CGI UK, TacNav is designed to plan, execute and monitor tactical navigation for the Intelligent Ship. Also featuring in Phase 2 was CGI’s SYCOIEA TEWA decision aid.

Because the project was unable to fund all the proposals arising from the DASA call, the decision was taken to select a broad spectrum of AI agents spanning a range of platform and combat system functions. For example, Rolls-Royce developed a decision-making control system known as ACE (Artificial Chief Engineer), designed to make condition-based decisions about how best to operate ship machinery – engines, propulsion system, electrical network and fuel system – according to command priorities. Another AI, called IBIS (Internal Battle Intelligence Reinforcement Learning for Damage Control and Firefighting), was conceived by Frazer Nash Consultancy as a predictive damage control tool using novel AI-based reinforcement learning techniques.
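Neither ACE nor IBIS has published its decision logic. As a generic illustration of condition-based machinery decision-making of the kind ACE is described as performing, the sketch below selects a notional propulsion line-up from assumed equipment-health scores and a stated command priority; the configuration names, health fields and priority scheme are all assumptions.

```python
# Illustrative only: select a propulsion line-up from equipment condition and command priority.
def select_lineup(engine_health: dict[str, float], priority: str) -> list[str]:
    """Run only engines above a health threshold; use more of them when speed is the priority."""
    serviceable = [name for name, health in engine_health.items() if health >= 0.6]
    serviceable.sort(key=lambda name: engine_health[name], reverse=True)
    if priority == "maximum_speed":
        return serviceable        # bring every healthy engine online
    if priority == "fuel_economy":
        return serviceable[:1]    # cruise on the single healthiest engine
    return serviceable[:2]        # default: balanced two-engine line-up

health = {"diesel_1": 0.9, "diesel_2": 0.4, "gas_turbine": 0.8}
print(select_lineup(health, "fuel_economy"))   # ['diesel_1']
```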

The Intelligent Ship team also selected a Decision Lab-developed AI known as CIAO (Advanced Compounded Intelligent Agents for Optimisation) that could be employed to arbitrate conflicting outputs delivered by two different agents. For example, it might come into play if TacNav recommends a course based on underwater obstacles or local shipping traffic but a TEWA agent suggests an alternative course in order to open weapon arcs against an incoming threat. CIAO was implemented at several points in the system so as to offer compounded advice at different stages of the decision chain.
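Decision Lab has not published CIAO’s arbitration logic. The sketch below simply illustrates one way conflicting course recommendations from a navigation agent and a TEWA agent might be reconciled, using an assumed priority rule driven by threat level and navigational safety; the field names and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    source: str        # e.g. 'TacNav' or 'TEWA'
    course_deg: float  # recommended ship's heading
    safe: bool         # navigationally safe according to the issuing agent
    urgency: float     # 0..1, how strongly the agent wants this course

def arbitrate(nav: Recommendation, tewa: Recommendation, threat_level: float) -> Recommendation:
    """
    Illustrative priority rule (not CIAO's actual logic): under high threat a safe TEWA
    course wins; an unsafe TEWA course always defers to navigation; otherwise take
    whichever recommendation carries the higher urgency.
    """
    if threat_level > 0.7 and tewa.safe:
        return tewa
    if not tewa.safe:
        return nav
    return max((nav, tewa), key=lambda r: r.urgency)

nav = Recommendation("TacNav", course_deg=90.0, safe=True, urgency=0.4)
tewa = Recommendation("TEWA", course_deg=150.0, safe=True, urgency=0.8)
print(arbitrate(nav, tewa, threat_level=0.9).source)  # 'TEWA'
```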

Personnel in the CIC of the guided missile destroyer USS Dewey (DDG-105). The coming years will see AI employed in numerous contexts across the naval domain, melding humans and machines together. (Photo: US Navy)

Command Lab

ISAIN was integrated within a Command Lab facility established at Dstl’s Portsdown West site. This facility – hosting a live, virtual, and constructive simulation made up of open and flexible hardware, software, networks, databases and protocol interfaces – has been co-funded by several parts of Dstl. It serves as a configurable testbed affording the capability to conduct experimentation and integrate new systems in all warfighting environments.

To support Intelligent Ship experimentation and evaluation activity, the Command Lab was outfitted with operator terminals resembling CIC multifunction consoles, allowing military advisors to interact with AI agents in a pseudo-operational setting. Four separate evaluations were performed at the Command Lab during 2021 and 2022, with the complexity of scenarios, the number of agents and the maturity of those agents increasing over time.

The evaluations were run against a notional scenario, developed by Dstl military advisors, which allowed ADeMs to be demonstrated in a representative operational setting. This began with a planning phase. After this, the ‘ship’ – operating ahead of a larger task group – made a transit to undertake intelligence-gathering operations close to contested waters. With tensions running high, a confrontation ensued with an adversary Red Force. This culminated in an anti-ship missile attack in which own-ship damage was sustained. For the purposes of the evaluations, this end-to-end scenario was broken down into a series of shorter vignettes, each consisting of about half an hour of ‘operational’ activity. These were scripted so as to maximise the interactions between agents.

Phase 2 completed at the end of March 2022. The research and experimentation provided valuable early insights into the opportunities and benefits of bringing multiple AI applications together to make collective decisions, both with and without human operator judgement. At the same time, it identified a number of new questions about how AI-enabled automation is best implemented and managed in a complex command environment. The conclusion was that true operational advantage could only be derived by addressing the design and operation of teams of multiple intelligent machine agents, and by enabling and optimising the integration of humans within those teams to form effective Human-Autonomy Teams (HATs).

DASA, working in partnership with Dstl, announced its plans for Phase 3 of the Intelligent Ship in early 2023. Building on the collaborative AI concepts previously developed and evaluated in Phase 2, this follow-on S&T programme is being structured so as to explore the benefits of earlier and more focused consideration of the human components of a HAT to support future naval command and control.

Phase 3 aims to design an integrated system for a HAT that can deliver aspects of above-water naval command and control, and to give more detailed consideration to the arbitration needs of collaborative AI-based HATs. This will drive a greater focus on systems design, as opposed to AI agent development; the integration of the human within the HAT system; and an understanding of arbitration approaches for potentially conflicting recommendations from different AI agents. The intention is that the existing ISAIN environment will be used for integration and evaluation.

A competition for Phase 3 is expected to begin in April 2023. The intention is that a single collaborative and multi-disciplinary team will deliver all outputs, including system design, build, integration and evaluation. Current plans envisage the award of a contract in the third quarter of 2023, with Phase 3 activity expected to run through to December 2024.

Conclusion

The coming years will see AI employed in numerous contexts across the naval domain. At the same time, it is recognised that the use of AI raises a number of profound ethical, legal and governance issues. The challenge facing navies, defence science and industry today is to identify operational shortfalls and capability gaps where AI may form part of the solution, and to understand how best to meld humans and machines together so as to combine human cognition, intuition and responsibility with machine-speed analytical capabilities.

In the longer term, the introduction of AI into the command chain may demand a paradigm shift. Instead of designing systems and then engineering an interface with a human operator, the command and control system of the future will be designed such that human-machine teaming is a fundamental part of the underpinning concept and design. Furthermore, careful attention will be required to determine the optimal balance between human and machine elements within a command team across a range of operational scenarios and tasks.

Richard Scott