From the French Navy’s embedded data hubs to predictive maintenance software on US vessels, artificial intelligence (AI) is already changing maritime warfare. Yet widespread deployment remains limited by questions of trust in the algorithms, unsettled ethical frameworks, and the complex challenge of making allied systems work together.
“CommandGPT, considering submarine threat indices, minefield charts, weather forecasts and current fuel estimates, recommend the optimal transit route, through or around choke point Charlie, to minimise detection and maximise safety.” A blinking cursor. Five seconds later: “Transit from Point Alpha (37°00′N, 20°00′E) to Point Bravo (36°30′N, 22°00′E) via the Western Shallow Corridor, keeping to a 10–12 kn speed profile. This plan minimises exposure to submarines and mines, optimises fuel use given current weather, and incorporates built-in checks to address uncertainties. Safe transit.” At the bottom, an option appears: Review System Reasoning.
While ‘CommandGPT’ may be fictitious (for now!), artificial intelligence (AI) is no longer a futuristic concept in maritime warfare. Its integration into command systems – particularly in data fusion, threat evaluation, and decision support – has already begun to shift the tempo of naval operations. The real question is no longer whether AI can support command decisions, but how far it can – and should – go.
AI in command: From capability to complexity
That AI offers ship commanders a significant tactical edge is now a well-established fact. Integrating AI in command and control (C2) is about “enabling navies to make faster, more informed decisions in complex and contested domains”, Graeme Nayler, BMT’s Asia Pacific Regional Business Director, wrote to ESD.
Maritime theatres are becoming increasingly saturated, ambiguous, and contested, often involving both conventional threats and hybrid tactics. Xavier Mesnet, Naval Segment Director at Thales, told ESD that customers report theatres of operations having become more complex, “featuring faster, stealthier and more saturating threats”. To address this, platforms, both manned and unmanned, now carry a vast array of acoustic, radar, infrared (IR) and electromagnetic sensors that capture a real-time, multi-domain operational picture. The trade-off, however, is a deluge of data that is increasingly overwhelming C2 crews and systems.
AI offers solutions to some of these problems. Algorithms are already improving sensor discrimination, enhancing radar performance, and fusing data at the system level to deliver more coherent, timely situational awareness. AI also enables predictive maintenance and, in the case of uncrewed systems, limited autonomy.
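To make the fusion step concrete: at its simplest, combining two sensors’ estimates of the same contact reduces to weighting each report by its uncertainty. The Python sketch below is a minimal, hypothetical illustration of that principle; the sensor values, covariances and function name are invented for the example and do not represent any fielded combat system.

```python
import numpy as np

def fuse_tracks(pos_a, cov_a, pos_b, cov_b):
    """Fuse two independent position estimates of the same contact.

    Each sensor reports a 2D position (east/north, metres) and a 2x2
    covariance matrix describing its uncertainty. The fused estimate
    weights each report by the inverse of its covariance, so the more
    confident sensor dominates.
    """
    inv_a = np.linalg.inv(cov_a)
    inv_b = np.linalg.inv(cov_b)
    fused_cov = np.linalg.inv(inv_a + inv_b)
    fused_pos = fused_cov @ (inv_a @ pos_a + inv_b @ pos_b)
    return fused_pos, fused_cov

# Example (invented numbers): a radar track with tight cross-range but
# loose down-range accuracy, and a looser acoustic track on the same contact.
radar_pos = np.array([12_000.0, 3_500.0])
radar_cov = np.diag([400.0, 2_500.0])
acoustic_pos = np.array([11_600.0, 3_900.0])
acoustic_cov = np.diag([3_600.0, 3_600.0])

pos, cov = fuse_tracks(radar_pos, radar_cov, acoustic_pos, acoustic_cov)
print(pos, np.sqrt(np.diag(cov)))  # fused position and 1-sigma uncertainty
```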
For example, the French Navy began experimenting with the integration of AI in C2 systems in 2020 with the creation of a naval data and AI service centre in Toulon (Centre de Services de la Donnée et de l’Intelligence Artificielle Marine – CSDIAM). In 2023, the FREMM anti-submarine warfare (ASW) frigate Provence was the first French Navy ship to sail with an embedded data hub (Data Hub Embarqué – DHE). Developed as a joint effort between the French directorate for armament (Direction Générale de l’Armement – DGA), Thales, Naval Group and the French Navy, the DHE is a system that collects all the sensors’ raw data and features AI algorithms allowing the Navy to exploit that data in ways not initially foreseen in the combat management system (CMS). “DHEs are effectively onboard AI stations that allow Navy personnel to review system outputs, annotate the data, and contribute directly to the refinement of algorithms,” Vincent Gicquel, R&T and Innovation Director at Thales, explained to ESD.
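As a rough illustration of what such an onboard annotation workflow involves, the toy Python sketch below shows the basic loop: log an algorithm’s output, let an operator correct it, and export the annotated records for later retraining. The class and field names are invented for the example and do not describe the DHE’s actual architecture.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Detection:
    """One system output presented to an operator for review."""
    sensor: str            # e.g. "radar", "acoustic"
    raw: dict              # raw measurement as recorded by the hub
    model_label: str       # what the onboard algorithm classified it as
    operator_label: Optional[str] = None  # crew annotation, if any

@dataclass
class DataHub:
    """Toy onboard data hub: logs outputs, collects crew annotations,
    and exports the annotated subset for later algorithm refinement."""
    detections: List[Detection] = field(default_factory=list)

    def record(self, detection: Detection) -> None:
        self.detections.append(detection)

    def annotate(self, index: int, operator_label: str) -> None:
        self.detections[index].operator_label = operator_label

    def export_training_set(self) -> List[dict]:
        # Only annotated items feed the next training cycle ashore.
        return [
            {"raw": d.raw, "label": d.operator_label}
            for d in self.detections
            if d.operator_label is not None
        ]

hub = DataHub()
hub.record(Detection("radar", {"range_m": 9000, "bearing_deg": 42}, "fishing vessel"))
hub.annotate(0, "patrol craft")       # crew corrects the model's call
print(hub.export_training_set())
```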
Following successful trials aboard Provence, the French Navy began integrating DHEs across the Clemenceau 2025 Carrier Strike Group, on the Charles de Gaulle aircraft carrier and supporting frigates, as well as on a submarine, a helicopter and a maritime patrol aircraft. “The integration of the DHE has allowed us to see and understand some tactical situations much more clearly,” Captain Xavier, from the French Navy, told ESD. “It gives us greater information superiority and more warning, which in turn grants us greater tactical superiority.”
Yet while AI has begun to compress the OODA (observe, orient, decide and act) loop for individual ships, scaling this advantage across entire fleets – or coalition forces – raises more complex questions. The real friction begins not in the combat information centre, but in the interstices between navies, doctrines, legal frameworks, and operational cultures.
As Mesnet from Thales pointed out, “To operate a drone, multiple people are actually required,” revealing the human infrastructure still anchoring autonomous ambitions. Now multiply that across allied navies – each with its own data sets, doctrines, and definitions of trust – and a new kind of interoperability challenge emerges, one that computing power alone can’t solve.
Three issues in particular sit at the heart of this tension: how AI is trained, how its outputs are trusted, and how its use is governed – especially across multinational operations. Each has technical, operational, and political dimensions. Each could determine whether AI becomes a true force multiplier for joint task forces – or a source of fragmentation when cohesion is needed most.
Trust: The hard currency of AI in defence
One of the main barriers to wider adoption of AI in defence is trust. Without it, the role of AI is unlikely to extend much beyond decision-support, with a human permanently kept in the loop.
As Captain Xavier of the French Navy put it, “If tomorrow AI is used to suggest and deliver a lethal action, we must be certain that the AI did not make a mistake.” In civilian contexts, an AI misidentifying a cat as a dog may be an amusing glitch. In defence, the consequences could be catastrophic. Navies must know that the algorithm used the right data, interpreted it correctly, and acted within the parameters of lawful and ethical conduct.
Part of that trust comes from understanding how the algorithm was trained – and with what data. Yet this introduces another core challenge: most of the data relevant to naval operations is classified. Industry leaders developing AI algorithms can only train their models using open-source datasets, such as weather or publicly available legal frameworks, as well as what navies are willing to share. “It becomes a matter of client maturity,” Gicquel told ESD, referring to a navy’s willingness and ability to take ownership of its data pipeline.
The French Navy has begun addressing this through two key initiatives. First, it has equipped ships with DHEs that give crews full autonomy over their onboard data. Second, that frontline input is processed ashore by the CSDIAM in Toulon, in coordination with France’s ministerial agency for defence AI (AMIAD).
A second challenge affecting trust lies in the constant evolution of AI systems. As threats evolve, so too must the algorithms. “While navy ships’ lifecycle is 30 years, today’s algorithms and systems are constantly evolving to match emerging threats,” Julien Servel, Air and Surface Innovation Manager at Naval Group, told ESD. This dynamic introduces cybersecurity concerns – specifically, how to ensure the integrity of the data during updates. Strict access protocols, secure cloud infrastructures, and multi-layered verification mechanisms are critical to preventing compromise.
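A minimal sketch of what layered verification of an algorithm update can look like is shown below. It is illustrative only: the key handling and values are invented, and a real deployment would rely on asymmetric signatures, hardware roots of trust and controlled distribution channels rather than a simple pre-shared key. The point is that an update must clear several independent checks before it is accepted aboard.

```python
import hashlib
import hmac

def verify_model_update(artifact: bytes, expected_sha256: str,
                        signature: bytes, signing_key: bytes) -> bool:
    """Two independent checks before an algorithm update is accepted:
    1. the artifact's hash matches the manifest delivered through a
       separate channel, and
    2. an HMAC over that hash verifies against a pre-shared key.
    Both must pass; failing either rejects the update."""
    digest = hashlib.sha256(artifact).hexdigest()
    if not hmac.compare_digest(digest, expected_sha256):
        return False
    expected_sig = hmac.new(signing_key, digest.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected_sig, signature)

# Example with made-up values
key = b"pre-shared-update-key"
update = b"...model weights..."
manifest_hash = hashlib.sha256(update).hexdigest()
sig = hmac.new(key, manifest_hash.encode(), hashlib.sha256).digest()
print(verify_model_update(update, manifest_hash, sig, key))  # True
```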
One area where data availability is less constrained is simulation and synthetic environments. In ‘AI at War’, Tangredi and Galdorisi argued that “if one had a suitably robust simulation of naval warfare and nearly infinite training time available, one could solve naval warfare with these methods.” While that capability did not yet exist in 2021 when the book was published, progress is being made. The Defense AI Observatory (DAIO), in collaboration with the German Army Concepts and Capability Development Centre (ACCDC), has developed a synthetic training project known as the Defence Metaverse. According to DAIO co-director Dr Heiko Borchert, the team used open-source terrain data, publicly available information on adversary air defences, and other inputs to build a digital twin of a notional battlespace.
Ethics: Drawing the line between assistance and autonomy
For all the excitement around AI in defence, actual implementation – particularly in naval operations – has lagged behind the rhetoric, especially in relation to one fundamental aspect: ethical frameworks.
Ultimately, AI is all about autonomy, with several subcategories defined by Tangredi and Galdorisi. These include trusting an algorithm to sift through a large volume of data and perform calculations faster than a human being ever could (simple AI), to learn and self-programme within a very specific and pre-determined range of activities (narrow AI), or even to mimic human decision-making across multiple tasks or concepts (general/strong AI).
Yet with autonomy comes the question of control. As Dr Borchert explained, navies can shape AI’s decision-making boundaries in multiple ways – from requiring a single confirmation before firing, to mandating several, with a human always making the final call. “There are different levels of making the straightjacket [around AI] narrower or giving the system more freedom to manoeuvre,” he told ESD. Which leads to the central question: how much freedom are navies actually willing to give?
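The idea of a tighter or looser ‘straightjacket’ can be expressed very simply in code. The Python sketch below is a hypothetical illustration rather than any navy’s doctrine: it gates an engagement on a configurable number of human confirmations, with the most restrictive setting never allowing the system itself to act.

```python
from enum import Enum

class AutonomyLevel(Enum):
    ADVISORY = 0        # system recommends only; humans act
    SINGLE_CONFIRM = 1  # one operator confirmation required
    DUAL_CONFIRM = 2    # two independent confirmations required

def may_engage(level: AutonomyLevel, confirmations: int) -> bool:
    """Return True only if the configured number of human
    confirmations has been obtained for this engagement."""
    if level is AutonomyLevel.ADVISORY:
        return False  # the system itself never releases a weapon
    return confirmations >= level.value

print(may_engage(AutonomyLevel.DUAL_CONFIRM, confirmations=1))  # False
print(may_engage(AutonomyLevel.DUAL_CONFIRM, confirmations=2))  # True
```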
In Western armed forces, the answer remains firmly within the ‘human-in-the-loop’ model. Ethical doctrines consistently stress the need for meaningful human control, legal accountability, and alignment with international humanitarian law. France was one of the first to formalise this with its 2019 strategy paper ‘AI in Support of Defence’, which clearly articulated the need for human oversight. In her April 2019 Saclay speech, then-Minister of the Armed Forces Florence Parly underlined the importance of retaining command responsibility.
The US followed suit. Its 2023 Data, Analytics, and AI Adoption Strategy explicitly foregrounds Responsible AI (RAI), describing it as “a journey of trust” anchored in guidelines, accountability, and human integration. The UK’s 2022 Defence AI Strategy echoes this position, with “human-machine teaming” as the default, guided by principles developed with the Centre for Data Ethics and Innovation.
However, not all countries take the same view. Russia, while officially referencing the need for ethical norms in AI development, has repeatedly advocated for Lethal Autonomous Weapon Systems (LAWS), and its national code of AI ethics applies only to civilian applications. As Dr Katarzyna Zysk notes in her chapter within ‘The Very Long Game – 25 Case Studies on the Global State of Defense AI’, published in 2024, Russia’s military AI posture remains ambiguous, but clearly leans toward greater autonomy. Similarly, author John Lee argued within the same volume that while Chinese leadership regularly promotes “safe, controllable AI,” the PLA operates largely outside China’s civilian data governance frameworks, making oversight difficult to assess.
This divergence in ethical and doctrinal approaches has a direct operational consequence: very few navies have actually fielded AI in meaningful C2 roles. According to Dr Borchert’s introduction in ‘The Very Long Game’, only a handful of countries – Denmark, Italy, Russia, Israel, China, Japan, and Singapore – have deployed AI systems in the field. In most cases, the domain was likely land or air, while naval use remains minimal. The US Navy, for instance, only recently installed an AI system aboard a surface vessel, the USS Fitzgerald, and even then, it was limited to predictive maintenance.
Beyond the ship: AI at the coalition scale
Dr Borchert told ESD that one of the more surprising findings during synthetic training of AI-enabled drone swarms was this: it’s not mass that matters, but the smartness of the mass. In those simulated environments, drones operating autonomously within a swarm learned that coordination with partners was more effective than pursuing isolated objectives. Through collaborative behaviour, they reduced the size of the swarm and improved mission success.
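A deliberately toy illustration of that finding, unrelated to DAIO’s actual simulation and with all values invented, is sketched below: when drones share which targets they have claimed, the same number of platforms covers the full target set, whereas uncoordinated picks waste effort on duplicates.

```python
import random

def coverage(assignments, num_targets):
    """Fraction of targets covered by at least one drone."""
    return len(set(assignments)) / num_targets

random.seed(1)
num_drones, num_targets = 6, 6

# Uncoordinated: each drone independently picks the target it happens
# to detect first (random here), so several may chase the same one.
uncoordinated = [random.randrange(num_targets) for _ in range(num_drones)]

# Coordinated: drones share their picks and each takes the first
# target not already claimed by a partner.
coordinated, claimed = [], set()
for _ in range(num_drones):
    target = next(t for t in range(num_targets) if t not in claimed)
    claimed.add(target)
    coordinated.append(target)

print("uncoordinated coverage:", coverage(uncoordinated, num_targets))
print("coordinated coverage:  ", coverage(coordinated, num_targets))
```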
The same principle applies to navies and their fleets, especially at a time when operational demands are stretching resources across multiple theatres, from conventional engagements to hybrid threats such as undersea infrastructure sabotage. AI for C2 and battle management has shown promise – not just at the level of individual platforms, but also, as demonstrated in the French Navy’s Clemenceau 2025 deployment, at task group level.
Yet success comes more easily when every ship processes similar data and follows the same ethical logic. What happens when multiple AI-enabled platforms must operate together – despite having been trained in different environments, under different conditions, with different assumptions?
As Naval Group’s Julien Servel noted: “Each AI algorithm we sell to our client needs to be adapted to their context, because navies don’t all navigate in the same environments, and therefore the same conditions.” In practice, this means individual navies are given autonomy to train and refine their own AI systems without oversight or shared frameworks. The result is a growing patchwork of algorithms, each optimised locally, but potentially incompatible operationally.
This becomes even more complex when allies assign different levels of autonomy to their systems. Can a nation relying on tightly constrained AI trust outputs generated by a partner’s more autonomous system? Will that partner’s AI interpret shared data as intended? What happens when one navy sees AI as a tool, and another sees it as a tactical co-pilot?
There are also significant disparities in maturity and adoption. As French Navy Captain Xavier explained, part of the Clemenceau deployment’s objective was to familiarise crews with AI across a broad operational base. “There are over 3,000 people in a carrier strike group, so the catchment was very large,” he told ESD. However, not all nations have the scale, or the political will, to roll out such experiments.
Recognising these challenges, NATO has begun laying the groundwork for an AI-enabled future that is both interoperable and accountable. In recent policy work, the Alliance has emphasised the need for “AI-enabled decision support” that is explainable, auditable, and traceable. Yet as Servel pointed out, ambition must be matched by infrastructure: standardised data models, robust training environments, and trusted certification processes. Without these foundations, coalition AI will necessarily remain fragmented.
Ultimately, operationalising AI in maritime command is no longer just a technical challenge; it is a strategic imperative. Individual navies may develop competent systems, but unless their AI can operate seamlessly across platforms, partners, and protocols, its value in coalition warfare will remain limited. The real strength of AI lies not in the system alone, but in the ability of systems, and states, to think and act together. Because in the battlespace of the future, it won’t be the smartest algorithm that wins – it will be the smartest team.