Project Linchpin is an important but little-known initiative led by the US Army’s Program Executive Office for Intelligence, Electronic Warfare, and Sensors (PEO IEW&S). It aims to “deliver trusted AI” capabilities “across the Army enterprise”.
PEO IEW&S is focused on continuing what it calls “the long-term Campaign of Learning for Intelligence and Sensor modernization, including development and integration of AI/ML capabilities to meet operational requirements.”
Project Linchpin (PL) is a collaboration between Army Futures Command’s AI Integration Center, Army Research Labs, Development Command, and the Chief Data and AI Office (CDAO) within the Office of the Secretary of Defense. Through what it calls its “AI Operations and Services ecosystem”, PL provides a framework to leverage partnerships, innovative acquisition processes, standardisation, security, and regulatory compliance. The overarching goal is to “ensure the effective and trustworthy integration of AI technologies”.
When he served as Army Assistant Secretary for Acquisition, Logistics and Technology, Doug Bush said that “the pace of innovation taking place [with AI] — it’s not slowing down, it’s accelerating…How do we make sure Joint All-Domain Command and Control elements are continuously open to state-of-the-art technologies?… Getting comfortable with [speed] should be normal…once you focus on discrete tasks [like Sensor AI] it’s a lot more plausible.”
Project Linchpin overview
PL has four key objectives:
- A Trusted AI Ecosystem: PL aims to build a secure and trusted environment for the entire AI lifecycle, from training algorithms on Army data to deploying them into the field.
- Standards and Open Architecture: A major PL focus is on developing common, foundational standards for data, ontologies, and labelling, creating an open architecture that allows for interoperability and streamlines AI development across different Army programmes.
- Security and Data Lineage: PL emphasises security by design, including the use of a ‘Software Bill of Materials’ (SBOM) to document components and track data lineage, ensuring the trustworthiness of AI models (see the sketch after this list).
- Collaboration: PL wants to foster a collaborative model – between industry and the Army – that creates a competitive environment for third-party integration into Army modernisation programmes. New industry partnerships would drive further innovation.
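PEO IEW&S has not published the schema behind its SBOM and lineage tracking, but a minimal sketch of what such a record might contain could look like the following. Every field name here is the author’s illustration, not drawn from PL documentation:

```python
# Illustrative only: a minimal lineage record of the kind an SBOM-style
# approach implies. Field names are hypothetical, not taken from Project Linchpin.
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    model_name: str
    model_version: str
    training_datasets: list[str]       # identifiers of the datasets used
    dataset_hashes: dict[str, str]     # dataset id -> SHA-256 of its manifest
    software_components: list[str]     # libraries/frameworks, as in an SBOM
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Hash the whole record so later tampering is detectable."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

record = LineageRecord(
    model_name="example-atr-model",
    model_version="0.1.0",
    training_datasets=["sensor-set-a"],
    dataset_hashes={"sensor-set-a": hashlib.sha256(b"manifest").hexdigest()},
    software_components=["pytorch==2.3", "numpy==1.26"],
)
print(record.fingerprint())
```

The point of the fingerprint is simply that any later change to the training data, software components, or model version becomes detectable downstream.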
PL’s pipeline starts with collecting, storing, and validating large datasets relevant to intelligence, cyber, and electronic warfare sensors. Data is transformed and features are engineered to prepare it for model training. AI/ML algorithms are trained within the secure environment, leveraging data and defined standards. Trained models are then deployed into cloud environments and battlefield networks for use in the field. The pipeline includes a mechanism to continuously monitor model performance in production and refine algorithms based on user feedback.
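For readers who want a concrete picture, the pipeline just described maps onto a fairly conventional MLOps loop. The following is a deliberately simplified, hypothetical sketch; the stage functions are placeholders supplied by the caller, not PL’s actual software:

```python
# Simplified sketch of the loop described above: collect -> validate ->
# transform -> train -> deploy -> monitor, then refine on feedback.
# Nothing here reflects Project Linchpin's real implementation.
from typing import Any, Callable

def run_pipeline(
    collect: Callable[[], list],
    validate: Callable[[list], list],
    transform: Callable[[list], list],
    train: Callable[[list], Any],
    deploy: Callable[[Any], None],
    monitor: Callable[[Any], float],
    retrain_threshold: float = 0.8,
    max_cycles: int = 3,
) -> Any:
    model = None
    for _ in range(max_cycles):
        raw = collect()              # sensor data lands in secure storage
        clean = validate(raw)        # reject malformed or mislabeled records
        features = transform(clean)  # feature engineering for training
        model = train(features)      # training inside the secure environment
        deploy(model)                # push to cloud and battlefield networks
        if monitor(model) >= retrain_threshold:
            break                    # performance acceptable; stop refining
    return model
```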
![A PEO IEW&S graphic illustrating the objectives of Project Linchpin. [PEO IEW&S]](https://euro-sd.com/wp-content/uploads/2025/11/Photo-2-Kopie-1024x552.jpg)
Lessons from elsewhere
The author asked Sek Chai, co-founder/CTO of Latent AI Corp., about lessons drawn from elsewhere (especially from governments and the corporate world): “PL is a bold effort to standardize a Machine Learning Operations (MLOps) framework that supports model and data integrity, and modular open system architecture (MOSA). At the very early stage, PL defined specific principles of Traceability, Observability, Replicability, and Consumability (TORC). These principles ensure interoperability for rapid fielding of AI models.”
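Neither Latent AI nor PL has published a reference implementation of TORC, but as a purely hypothetical illustration, the four principles can be read as metadata that must accompany every model artefact. The field names below are the author’s guesses at what each principle implies:

```python
# Hypothetical illustration of TORC as a manifest check; the required fields
# are the author's interpretation, not Latent AI's or Project Linchpin's.
REQUIRED_FIELDS = {
    "traceability": ["training_data_ids", "commit_hash", "approved_by"],
    "observability": ["metrics_endpoint", "logging_level"],
    "replicability": ["random_seed", "training_config", "framework_versions"],
    "consumability": ["model_format", "input_schema", "output_schema"],
}

def check_torc(manifest: dict) -> list[str]:
    """Return a list of missing fields, grouped by TORC principle."""
    missing = []
    for principle, fields in REQUIRED_FIELDS.items():
        for f in fields:
            if f not in manifest:
                missing.append(f"{principle}: {f}")
    return missing

manifest = {"training_data_ids": ["set-a"], "model_format": "onnx"}
print(check_torc(manifest))   # lists everything the packager still has to supply
```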
Chai went on to say that he finds PL to be important because it “recognises the need to innovate at startup speed, and to meet the breakneck pace of AI innovation. As such, PL makes a bold step to move away from the traditional DoD acquisition process, and overreliance on legacy sole-source integrator roles”. Startups like Latent AI offer dual-use technologies which PL can access, without being locked into a one-size-fits-all solution.
Some of PL’s advocates worry about obstacles which stand in the way of its core aim: delivering trusted AI capabilities across the Army enterprise. To understand what could weaken PL’s ability to reach its stated goals, we turned to Dr Alan R. Shark, Associate Professor at George Mason University’s Schar School of Policy and Government. He co-chairs the National Academy of Public Administration’s Standing Panel on Technology Leadership.
Two major items stand out to Shark as potential stumbling blocks if not adequately addressed. First is data access, quality and diversity: “Despite vast amounts of data, the Army faces significant challenges in data accessibility and quality. Successful AI technology data pipelines rely on rich, representative datasets – but Army sensor data has been historically siloed, unlabeled, or ‘dirty’. Without high-quality, standardised access, training and validation of AI models will be a struggle”.
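The kind of hygiene check Shark’s point implies is simple to describe, even if doing it at Army scale is not. The record layout below is invented purely for illustration:

```python
# Toy illustration of basic dataset hygiene checks: flagging unlabeled,
# duplicated, or out-of-range sensor records before they reach training.
def audit_records(records: list[dict]) -> dict[str, int]:
    seen_ids: set[str] = set()
    issues = {"unlabeled": 0, "duplicate": 0, "out_of_range": 0}
    for r in records:
        if not r.get("label"):
            issues["unlabeled"] += 1
        if r.get("id") in seen_ids:
            issues["duplicate"] += 1
        seen_ids.add(r.get("id", ""))
        if not (0.0 <= r.get("signal_strength", 0.0) <= 1.0):
            issues["out_of_range"] += 1
    return issues

sample = [
    {"id": "1", "label": "vehicle", "signal_strength": 0.7},
    {"id": "1", "label": "", "signal_strength": 1.4},  # duplicate, unlabeled, out of range
]
print(audit_records(sample))   # {'unlabeled': 1, 'duplicate': 1, 'out_of_range': 1}
```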
The second issue for Shark is one that continues to plague the military (and all public institutions) – a shortage of talent and expertise: “AI and machine learning operations require specialised skills in algorithms, data engineering, system security, and deployment. The Army currently lacks sufficient in-house talent with these capabilities and must rely heavily on outside contractors, compounding the complexity of centralising and standardising machine learning operations.”
Beyond PL, many have voiced concerns regarding having one military organisation (the Army) go it alone. Shark notes that “while most agree that the general direction of PL is admirable and desired, it will make it challenging to deploy AI technologies across the other armed services’ platforms.”
Chai has a different view: “PL’s efforts to decompose components within the MLOps pipeline, and to decouple AI from software applications, are already seeing adoption across the Army enterprise. PL’s impact runs across a wide range of mission needs, and there are still areas that will need more time to see adoption of AI. The main challenge may be the time for adoption, because AI technology continues to evolve rapidly.”
![Given AI’s centrality to so many projects, such as swarming unmanned aerial vehicles, it is likely to remain relatively safe from cuts. [US Army/SSgt Jacob Slaymaker]](https://euro-sd.com/wp-content/uploads/2025/11/Swarm-Ground-View_US-ArmySSgt-Jacob-Slaymaker-Kopie-1024x768.jpg)
PL aims at rapid development, integration, and deployment of advanced analytics to sensor modernisation efforts. There are precedents which help in understanding some of the barriers and likely hiccups. Chai notes that “PL is guided by earlier DoD efforts such as Project Maven and the Joint Artificial Intelligence Center (JAIC), which was later integrated into the CDAO. Lessons learned from the challenges of accelerating the delivery and adoption of AI help shape the way AI requirements are defined, and how models are procured.”
Chai is also focused on the commercial sector: “Large enterprises have access to tools and technical infrastructure that are both centralised and distributed. Critics who are concerned with prohibitive cost are likely narrowly thinking of legacy enterprise approaches to AI/MLOps. The counter-argument is that PL has opened up more opportunities for the commercial sector to offer their AI/ML solutions because of standardisation and interoperability requirements. The Army can have much more diverse offerings, and competition to offer the best-in-class models for specific needs. The technical infrastructure will still be needed for models and data that need security clearances, but the majority of the model can be developed beforehand.”
The 80-year-old Parsons Corp. entered into a Cooperative Research and Development Agreement (CRADA) with the U.S. Army Command, Control, Communications, Computer, Cyber, Intelligence, Surveillance and Reconnaissance (C5ISR) Center. The Army turned to Parsons because the company has decades of experience supporting the Army, other DoD organisations, and the Intelligence Community. Under the terms of this CRADA, Parsons examines diverse datasets, models, and deployment scenarios to determine the optimal metrics, specifications, and governance for AI/ML operational use cases – all provided by PL. In tandem with C5ISR and PL, Parsons is collaboratively researching deployment and model monitoring standards. Their shared goal is to support streamlined ML operations across the Army’s full ecosystem, and to integrate AI/ML effectively into Army ‘Systems of Record’.
For context, ‘Systems of Record’ are the single, authoritative data source for a given data element. They hold the most accurate, reliable, and up-to-date information. The purpose of Systems of Record is to prevent the data discrepancies and confusion that occur when information is duplicated across multiple systems. They provide a ‘single source of truth’ for critical business or mission data. Systems of Record in data management are characterised by strong security, controlled access, auditable history, and high reliability.
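In software terms, the idea reduces to a single controlled write path with an auditable history, as in this deliberately toy sketch; the class and field names are illustrative, not those of any actual Army system:

```python
# Toy illustration of the 'single source of truth' idea: one controlled write
# path, one read path, and an audit trail of every change. Purely illustrative.
from datetime import datetime, timezone

class SystemOfRecord:
    def __init__(self) -> None:
        self._data: dict[str, object] = {}
        # audit entries: (timestamp, actor, key, new value)
        self._audit: list[tuple[str, str, str, object]] = []

    def update(self, key: str, value: object, actor: str) -> None:
        """Controlled write path: every change is attributed and timestamped."""
        self._data[key] = value
        self._audit.append(
            (datetime.now(timezone.utc).isoformat(), actor, key, value)
        )

    def read(self, key: str) -> object:
        """All consumers read the same authoritative value."""
        return self._data[key]

sor = SystemOfRecord()
sor.update("vehicle_042_status", "mission capable", actor="maintenance")
print(sor.read("vehicle_042_status"))   # 'mission capable'
```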
Looking back at one piece of the history
PL’s predecessor was Project Maven, which successfully introduced AI and machine learning (ML) into military operations. Maven’s significant legacies include enhanced intelligence capabilities, faster decision-making, and ethical debates over autonomous warfare. Officially launched in 2017, the programme has evolved into a persistent capability known as the Maven Smart System (MSS). It was marked by several controversies, including Google employee protests and resignations, which led Google’s parent company, Alphabet, to declare in 2018 that it would not renew its DoD contract to work on Maven. The contract was subsequently taken over by Palantir. On 25 March 2025, the NATO Communications and Information Agency (NCIA) finalised the procurement of the Palantir Maven Smart System NATO (MSS NATO) for employment within NATO’s Allied Command Operations (ACO), headquartered at Supreme Headquarters Allied Powers Europe (SHAPE).
![A CG rendering of what someone using Maven would look like. In this case, a unified tactical display provides insight on vessels across the world, with the Maven programme overlaying real-time ship readiness and sustainment data at the user’s fingertips through AI. [Palantir]](https://euro-sd.com/wp-content/uploads/2025/11/Maven_Palantir-Kopie-1024x501.jpg)
By delivering AI-driven insights to commanders in near real-time, the programme accelerates the observe, orient, decide, act (OODA) loop. The Maven Smart System (MSS) can analyse data from multiple sensors and provide battlefield overlays that mark potential targets and assets, allowing for quicker decision-making.
MSS has been deployed in various combat and intelligence-gathering scenarios:
- Middle East: to identify targets for airstrikes in Iraq and Syria and to locate rocket launchers in Yemen and naval vessels in the Red Sea.
- Ukraine: to help Ukrainian forces track and target Russian equipment.
- Afghanistan: to support logistics and track aircraft and personnel during the evacuation.
MSS has also been used in non-combat scenarios, such as disaster relief efforts, to help organise and visualise logistics data. The programme created a model for a hybrid approach where AI makes predictions, human operators provide corrections, and the system learns and becomes more accurate over time. This collaborative process has proven effective in increasing model accuracy.
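The corrective loop described here is conceptually simple. As a hypothetical sketch (the function names are the author’s, and this is not Maven’s actual workflow):

```python
# Minimal sketch of the hybrid approach described above: the model predicts,
# a human operator confirms or corrects, and corrected examples feed retraining.
# All names are illustrative; this is not Maven's real workflow.
from typing import Any, Callable

def hybrid_loop(
    model: Callable[[Any], str],
    samples: list,
    human_review: Callable[[Any, str], str],
    retrain: Callable[[Callable, list], Callable],
) -> Callable[[Any], str]:
    corrections = []
    for sample in samples:
        predicted = model(sample)                    # AI makes a prediction
        confirmed = human_review(sample, predicted)  # operator confirms or corrects
        if confirmed != predicted:
            corrections.append((sample, confirmed))  # keep only corrected labels
    if corrections:
        model = retrain(model, corrections)          # model improves over time
    return model
```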
MSS demonstrated that the Pentagon could acquire and field technology far more quickly than its traditional procurement processes allow. The initial goal was to deploy the technology within six months. The project highlighted the US military’s need for private sector technology, especially from major tech companies. This led to significant new contracts with tech firms for AI and cloud services, though it also revealed friction between Silicon Valley culture and military ethics. The Google employees’ protest focused on the ethics of building AI for drone surveillance.
Maven represented the Pentagon’s first major foray into algorithmic warfare, operationalising AI innovations like computer vision into military doctrine and setting a precedent for future AI-driven capabilities.
Two views from academia
We asked for a perspective from one of the notable experts in the world of academia, Professor Andrew Reddie, Founder of the Berkeley Risk and Security Lab at the University of California, Berkeley. He sees PL as reflecting a view, widely held across the government and military, “that significantly more needs to be done to leverage AI tools in the service of national security for use cases as varied as intelligence, surveillance, and reconnaissance to decision support to kinetic applications of force”.
From Reddie’s vantage point, “much of what has been described inside of PL reflects a decades-long effort to make the best use of the varied sources of data in the service of the warfighter, reflected in Linchpin’s focus on C5ISR”. What is potentially new here, he thinks, “is the method of integrating these tools with Linchpin, described as a modular architecture (a ‘system of systems’) with each AI model going through testing, evaluation, and validation for a specific application before being knitted together into a single architecture”.
Reddie concludes that “much will depend on how relevant these tools will prove to be to the warfighter as they work in environments that tend to be inhospitable to technology (as evidenced by numerous past efforts to integrate basic IT)”. There are also more prosaic concerns regarding PL “that reflect long-held discussions concerning how AI tools are used in military contexts – namely, the conditions under which human oversight is necessary and what form this oversight takes in order to ensure meaningful human control. I will also note that a modular architecture does not necessarily deal with the risks posed by failures generated by the very integration of the system of systems”.
![Sailors conduct training with an Anduril Dive-LD Unmanned Undersea Vehicle (UUV), in Keyport, Washington, 11 December 2024. Given the wide range of applications for unmanned vehicles, from conducting surveillance to kinetic strikes, a degree of human oversight will be necessary to mitigate some kinds of risks. [US Navy/Loren Nichols]](https://euro-sd.com/wp-content/uploads/2025/11/Dive-LD_US-NavyLoren-Nichols-Kopie-1024x684.jpg)
In that case, focusing resources on capacity rather than capability gave the US and the Soviets a decisive advantage. Boyd sees this “in the way the DoD has been discussing AI. We love building apps, but not so much building the infrastructure to support and scale them. An AI application that does something like ATR (automatic target recognition) is a capability”.
Boyd continued: “Let’s assume this application works well identifying the desired target. Because it is mostly software (possibly dependent on a particular sensor), it scales easily. That scaling gives capacity, meaning we can do a lot of a certain capability. That’s good; however, the enemy gets a vote. Most likely he will alter whatever signature his target has, which means our AI-enabled ATR data set will quickly be obsolete.”
Boyd asked: how do we update that ATR software capability on all the systems it rides on so that there is no lag in our warfighting capacity? His answer: “This is where our infrastructure shortcomings come sharply into focus. The transport layer is insufficient to do over-the-air (OTA) software updates to edge models, so those systems will have a lag as they are updated. Same thing with onboarding new models. This problem cascades down through power (logistics for batteries, compute for cloud infrastructure, and so on). So I, and many others, have been trying to encourage the military to put a larger slice of their resources into infrastructure rather than applications. This sustains capacity in conflict but also enables adaptation before and during conflict. New applications can get to the edge faster. Project Linchpin, with which I have no direct personal experience, appears to be an attempt to focus resources towards the infrastructure component of capability development and fielding, and can probably be leveraged for capacity resilience. If I were them, I might even change the nominal focus from AI to just software. This is not an AI problem; it is a software and data problem. AI is just a type of software. So, Linchpin seems like a good start, though the products I have seen are very general and so don’t illuminate much on what exactly they are trying to do.”
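Boyd’s concern about fielded-model lag can be illustrated with a trivial check of local versus published model versions. The versioning scheme below is invented purely for illustration:

```python
# Hypothetical sketch of the update check Boyd describes: an edge system
# compares its local ATR model against what the pipeline has published and
# reports the lag. Names and versioning scheme are the author's invention.
from datetime import datetime

def needs_ota_update(local_version: str, published_version: str) -> bool:
    """True when the fielded model no longer matches the latest validated model."""
    return local_version != published_version

def update_lag_days(local_version_date: str, published_version_date: str) -> int:
    """How long an edge node has been running a stale model, in days."""
    fmt = "%Y-%m-%d"
    local = datetime.strptime(local_version_date, fmt)
    published = datetime.strptime(published_version_date, fmt)
    return max(0, (published - local).days)

print(needs_ota_update("atr-2.4", "atr-2.5"))       # True: push update when transport allows
print(update_lag_days("2025-01-10", "2025-02-01"))  # 22 days of fielded lag
```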
![Deception tactics, such as the use of decoys, have a long tradition on the battlefield, and no doubt continue in the age of autonomous weapons. Ensuring that the software is able to adapt to battlefield changes will therefore be vital. [US Army/1Lt Tam Le]](https://euro-sd.com/wp-content/uploads/2025/11/Decoy_US-Army1Lt-Tam-Le-Kopie-1024x678.jpg)
Boyd’s conclusion: “Our modern military knows it needs to become more software-defined and it is moving that way in fits and starts. What the military is starting to figure out, demonstrated by projects like Linchpin, is that a software-defined military needs a new kind of infrastructure to support it. That infrastructure does not exist. Linchpin seems poised to address that. We’re way behind, but the military is learning. Changing from hardware-defined to software-defined is hard. But I’m encouraged that they are recognising the problem and working to solve it.”
As far as centralisation is concerned, Boyd’s answer was: “Some things should be centralised and some shouldn’t be. Local infrastructure should be designed and built locally to centralised standards. Large ‘public works’ projects can be centralised to the joint force. We don’t need the services building unnecessary things due to parochialism. That said, the Army probably should defer to naval experts on how to do undersea software infrastructure.”
Gordon Feller


![US Army Maj Eric Sturzinger, assigned to the Artificial Intelligence Integration Center, gives a capability brief during Project Convergence 21 at Yuma Proving Ground, Arizona, on 20 October 2021. [US Army/Spc Kayla Anstey]](https://euro-sd.com/wp-content/uploads/2025/11/Photo-1-Kopie.jpg)