European systems house Thales is soon to publish a white paper on how human-machine teaming (HMT) can enhance decision-making and operational agility in defence, of which ESD has been given an advance preview.
Noting that the embedding of artificial intelligence (AI) and autonomous systems across modern defence systems is leading to a profound shift in the way decisions are made and missions are executed, the white paper argues that HMT offers a structured and ethical approach to integrating AI into operations, preserving human judgement while at the same time unlocking the advantages of intelligent systems.
As defence organisations seek ways to outpace adversaries, reduce the human cognitive burden and improve situational awareness capabilities, the white paper notes that AI and autonomy are central to this ambition, but warns that their integration must be carefully managed to maintain operational control and uphold legal and ethical standards.
In defining HMT, the Thales white paper states that, at its core, HMT “is about integrating machine intelligence into the decision-making chain in a way that enhances rather than undermines human authority”, thus striking the right balance between the capabilities and limitations of humans and the technologies they use.
The paper defines three main HMT models:
- human-in-the-loop (HITL), where humans retain full control over key decisions, with AI offering recommendations or processing support: a model common in targeting and surveillance systems;
- human-on-the-loop (HOTL), where AI executes certain tasks independently, while humans monitor and retain the ability to intervene: a model suitable for time-critical or high-tempo missions;
- and human-out-of-the-loop (HOOTL), in which AI operates autonomously without real-time human supervision, usually in tightly bounded, pre-approved scenarios.
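The essential distinction between the three models is where decision authority sits and when a human may intervene. As a purely illustrative sketch, not drawn from the white paper or any Thales implementation, the hypothetical Python below expresses each model as an explicit execution policy; all names, fields and thresholds are invented:

```python
# Illustrative sketch only: hypothetical types showing how the three HMT
# models described in the white paper might be expressed as explicit
# decision-authority policies. Names are invented for illustration.
from dataclasses import dataclass
from enum import Enum, auto


class TeamingModel(Enum):
    HITL = auto()   # human-in-the-loop: human approves every key decision
    HOTL = auto()   # human-on-the-loop: AI acts, human monitors and can veto
    HOOTL = auto()  # human-out-of-the-loop: AI acts within pre-approved bounds


@dataclass
class Recommendation:
    action: str
    confidence: float             # the model's confidence in its own output
    within_approved_bounds: bool  # falls inside a pre-authorised scenario?


def execute(rec: Recommendation, model: TeamingModel,
            human_approves, human_vetoes) -> bool:
    """Route a machine recommendation according to the teaming model.

    `human_approves` and `human_vetoes` stand in for real operator
    interfaces; here they are simple callables returning bool.
    """
    if model is TeamingModel.HITL:
        # Human retains full authority: nothing happens without approval.
        return human_approves(rec)
    if model is TeamingModel.HOTL:
        # AI proceeds by default, but the monitoring human may intervene.
        return not human_vetoes(rec)
    # HOOTL: autonomous execution, but only inside tightly bounded,
    # pre-approved scenarios; anything else is refused outright.
    return rec.within_approved_bounds


# Example: a surveillance cue handled under human-on-the-loop control.
rec = Recommendation("track_contact_042", confidence=0.93,
                     within_approved_bounds=True)
print(execute(rec, TeamingModel.HOTL,
              human_approves=lambda r: True,
              human_vetoes=lambda r: r.confidence < 0.5))
```

The point of making the policy explicit, rather than burying it in the autonomy software, is that the level of human authority becomes an inspectable design decision rather than an emergent behaviour.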
Referring to what is known as a sociotechnical system – in which both ‘social’ and ‘technical’ elements must be considered together because arrangements that are optimal for one may not be optimal for the other – the white paper suggests that a user-centred design approach can help ensure the ethical design of systems.
Speaking to ESD on 7 July 2025, prior to the white paper’s publication, Mark Chattington, technical director for Thales’ Research Technology and Solution Innovation division, emphasised how vital it is to bear in mind users and what they are trying to achieve in order to safeguard against any ethical distortions.
“There’s a couple of things to worry about,” he explained. “One is the output and worrying about whether the end result of what you do now is ethical; that’s the one thing that probably most people focus on. The thing that we’re also mindful of, and have discovered in more detailed analysis of how AI could be implemented, is that, if I don’t fully understand what you were doing, how you were processing information, how it informed the decisions that you were taking, I could fundamentally change your job as well; I could inadvertently cause ethical issues, which will be missed purely by [focusing on] ‘I can help you with your output’.”
Chattington further warned about the intricacies of AI implementation. “I can have an ethical module and put it into a platform,” he said, “but change the way a user operates and end up with a system that is unethical, or could potentially be unethical, in how it operates.”
The Thales white paper also noted that, while AI is particularly effective at managing and interpreting large volumes of data – such as sensor feeds, satellite imagery and open-source intelligence – such power introduces new risks.
“Black-box AI systems that cannot explain their reasoning undermine trust, while data bias or poor training can result in flawed decisions,” the paper noted, adding that “there is a risk that we look to AI to solve all problems when in fact alternative mitigation strategies may be more appropriate”.
The paper added that, to be effective in mission-critical settings, AI systems need to be explainable, reliable and aligned with a commander’s intent and broader mission objectives.
The paper also warned that the introduction of autonomous systems and AI could increase the risk of complacency, over-reliance and skill fade, while asserting that the most effective HMT systems support intuitive decision-making under pressure while preserving an operator’s moral agency.
Regarding the operational implementation of HMT, the white paper identified a number of barriers to its seamless integration across defence systems:
- the lack of a unified architecture, which hinders multi-domain operations and leaves UK Defence exposed to adversarial threats;
- the complexity of integrating AI into existing infrastructure;
- and human factors, whereby operators must be trained not only to use AI tools, but to understand their limitations and properly interpret their outputs.
Looking forward, the Thales white paper noted that HMT “directly supports the UK Ministry of Defence’s vision for a modernised, digitally enabled force”, adding that the UK Strategic Defence Review, published on 2 June 2025, has placed strong emphasis on AI, data exploitation and multi-domain integration. To deliver on that vision, the paper identified several priority areas:
- AI research and development focused on mission-specific autonomy, predictive analytics and adaptive learning;
- human factors research to understand how operators interact with AI tools;
- and testing and evaluation frameworks that simulate real-world conditions and build confidence in HMT systems.
In conclusion, the white paper asserts that HMT “is not a future ambition; it is a present necessity”, adding that, “as the pace and complexity of operations increase, defence forces must adopt technologies that support faster, more informed decision-making while preserving the moral and legal accountability of human command”.
Thales is currently working to integrate AI and HMT in numerous areas. Prominent among these is the company’s DigitalCrew concept: a collection of algorithms that reduce the cognitive burden on armoured vehicle crews of absorbing the information presented to them by a vehicle’s various sensors.
Speaking to ESD in early December 2024, Stewart MacPherson, head of digital strategy with Thales UK’s Optronics & Missile Electronics business, explained that machine learning algorithms, built primarily around convolutional neural networks (CNNs), offer a step change in situational awareness technology through their ability to recognise imagery from sensors and contextualise it. This, said MacPherson, means that the software “now has eyes” and will thus lead to changes in how situational awareness systems are developed.
“The human crew can only process so much data, so the DigitalCrew needs to step in at some point,” said MacPherson.
The machine-learning algorithms of DigitalCrew are particularly effective with regard to object tracking – for example tracking the movement of small UAVs at distance in a way the human eye simply could not – and object classification, especially since DigitalCrew is ever-present across all of the wavelengths of a vehicle’s sensor technology.
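For readers unfamiliar with the technique, the following is a minimal, hypothetical sketch of the kind of CNN image classifier MacPherson describes, assuming the PyTorch library; the architecture, input format and class labels are invented for illustration and do not describe DigitalCrew:

```python
# Minimal illustration of a CNN classifying a single sensor frame.
# PyTorch is assumed; the layer sizes, input resolution and class list
# are invented for this sketch and do not reflect any Thales system.
import torch
import torch.nn as nn

CLASSES = ["person", "vehicle", "small_uav", "clutter"]  # illustrative labels


class TinyClassifier(nn.Module):
    """Two convolutional blocks followed by a linear classification head."""

    def __init__(self, n_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1 channel, e.g. thermal
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dims regardless of size
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


# One 64x64 single-channel frame, as might come from a thermal sensor.
frame = torch.randn(1, 1, 64, 64)
logits = TinyClassifier()(frame)
print(CLASSES[logits.argmax(dim=1).item()])
```

In a fielded system such a classifier would be trained on labelled sensor imagery and run continuously across each sensor feed, which is what allows small, distant objects such as UAVs to be flagged and tracked faster than a human observer could manage.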
The Thales cortAIx initiative is designed to develop AI solutions that will enhance decision-making for human operators, even under the most challenging and constrained circumstances; improve the performance of the most advanced systems; and ensure that AI is deployed ethically, securely and transparently.
Thales has asserted that it is committed to growing the UK’s AI talent pipeline and has stated that, by the end of 2025, cortAIx in the UK will sustain 200 highly skilled AI and data-specialist roles, supporting the UK government’s vision for AI-driven growth and productivity.