Fake news and disinformation are cancers afflicting societies around the world, with potentially catastrophic consequences. Both also pose increasing risks to military operational and tactical decision-making.

It was the Newsthrob website that broke the story, possibly the biggest of the war so far. The special forces soldiers were all NATO troops. The commandos stormed what was left of the primary school, shooting indiscriminately at the survivors. Minutes before, a single Boeing GBU-31 Joint Direct Attack Munition, a 900 kg precision-guided bomb delivered by a NATO jet, had slammed into the playground during the morning break. The carnage was indescribable.

One by one, major news agencies began to pick up the story, running heavily redacted versions of Newsthrob’s video to spare their viewers the horrific scenes. The war was controversial to say the least: many Alliance members had seen large public demonstrations against it, which had often turned violent. Newsthrob’s story poured petrol on the tinderbox. However, the website, the video and the story it portrayed were entirely fake. NATO’s adversaries had packaged the whole episode and disseminated it on the internet; social media did the rest. NATO governments now had an extra fight on their hands in the court of public opinion. Denying a story that was not true was an unwelcome distraction from NATO’s fight against murderous proxies, sponsored by a near-peer adversary, which had spilled across the Alliance’s borders.

Know it when you see it

The term ‘Fake News’ became popularised during the 2016 US presidential election. The National Endowment for Democracy (NED), a non-profit body based in Washington DC and dedicated to the global promotion and advancement of democracy, defines it as “misleading content found on the internet, especially on social media.” NED’s definition adds that fake news can include “intentionally deceptive content, jokes taken at face value, large-scale hoaxes, slanted reporting of real facts, and coverage where the truth may be uncertain or contentious.” Although the phenomenon of fake news may seem recent, it is not. Also known as disinformation, it has been with us for centuries, if not millennia.

History as precedent

On 3 June 1693, William Anderton, a printer living in London, was on trial for his life. London’s famous Old Bailey court was hearing a case which alleged Anderton had committed high treason by writing and publishing a pamphlet entitled Remarks upon the Present Confederacy and Late Revolution in England. England was then in the throes of the Glorious Revolution. King James II of England and Ireland, who was also King James VII of Scotland, had been deposed in 1688; James Stuart was the last Catholic monarch of the three countries. Meanwhile, the Protestant William, Prince of Orange, was crowned on 11 April 1689, becoming King William III of England and Ireland, and King William II of Scotland. Anderton was not enthusiastic about the new monarch, and his pamphlet alleged that the Glorious Revolution was a foreign plot intended to plunge England and Ireland into war with France, which remained Catholic. Anderton’s defence was unsuccessful, and he was sentenced to death. On 16 June 1693, he met a sticky end at Tyburn, condemned to be hanged, drawn and quartered, the usual sentence for traitors, though in Anderton’s case the quartering was remitted.

It could be argued that Anderton’s actions were a clear attempt to stir up trouble, which is ultimately what disinformation and fake news are intended to do. There is, however, a point of contention between these two terms. Our definition from the National Endowment for Democracy argues that “fake news does not meet the definition of disinformation or propaganda,” because “its motives are usually financial, not political, and it is usually not tied to a larger agenda.” Examples of fake news seem to undermine this argument. Anderton was arguably not motivated by profit to undermine William’s reign. Instead, he had a political agenda, notably the return of the Stuart monarchy. His pamphlet nonetheless clearly meets the fake news definition with its “intentionally deceptive content.”

The aftermath of a devastating attack by NATO warplanes on a school. Or is it? The image is completely fake. It took the author less than five minutes to generate this picture using AI software freely available on the internet.
Credit: Thomas Withington

Fake it ‘til you make it

Recent examples of fake news are sadly plentiful. The ongoing conflict in Gaza between Hamas and Israel is instructive. On 23 October 2023, just over two weeks after the conflict erupted, Associated Press (AP) published an interesting report showing how serious the proliferation of fake news had become. A brace of false stories had circulated on the internet, such as reports that a top Israeli commander had been abducted. A video went viral on social media claiming that United States Marine Corps (USMC) personnel were being deployed to Israel in the aftermath of Hamas’s attacks. In reality, the troops in the video were from the US Army’s 101st Airborne Division. The footage was from June 2022, when the 101st deployed to a Romanian Air Force base to support NATO exercises in central Europe and shore up Alliance defences. The video was publicly available and taken from a US Department of Defense website. Nonetheless, the headline accompanying it was clearly designed to provoke emotion: “Happening now: Thousands of US Marines just landed in Israel: World War 3 High Alert!”

While the National Endowment for Democracy’s definition argues that the motives of fake news “are usually financial, not political”, this does not appear to be the case for the two examples cited above. One could argue that the intention of those who disseminate such falsehoods is clearly political: the desire is to influence emotions and reactions in an already highly emotive conflict. At its simplest, fake news can influence or confirm a person’s biases. Individuals are perhaps more likely to share videos, social media posts or links which confirm their own points of view. At the same time, people may discount factual content which runs counter to their preferred narrative as ‘fake news’.

A 2022 study by the University of Pittsburgh in Pennsylvania underscored the role that emotions can play in the dissemination of fake news. The study involved 879 participants during the 2020 US presidential election. Participants were shown fake news story headlines about the two presidential candidates Joe Biden and Donald Trump. Each participant was asked whether they would share a particular headline and how that headline made them feel. Around one third were prepared to share headlines which provoked a strong emotional response. Another third were upset by the headlines but were less prepared to share them, while the final third neither shared the headlines nor had an emotional response to them.

That emotion can cause a reaction is not surprising, particularly in the context of news events, yet reactions can be highly influential, particularly in democracies. Popular sentiment against US involvement in the Vietnam War between 1965 and 1975 played a major role in the administration of President Richard Nixon seeking a way out of the conflict. Similarly, the UK’s involvement in the 2003 US-led operation to remove the Iraqi dictator Saddam Hussein from power proved deeply divisive across the United Kingdom. Fake news is arguably intended as an accelerant, inflaming grievances already smouldering in a section of a polity. It works to have a strategic effect: lies are disseminated to try to influence a government or administration either to start or to desist from a specific course of action. Were the false reports of a US deployment to Israel intended for consumption by those opposing Israel’s military response to the 7 October 2023 attacks by Hamas? An apparent US deployment would show that Israel and the US were in cahoots with one another, jointly deploying forces against an Islamist political movement and its supporters. To be even more reductive, the fake news stoked misconceptions that a war between Judeo-Christian and Muslim civilisations was imminent, hence the “World War 3 High Alert” headline.

Worst of intentions

The employment of fake news and disinformation by Russia is well documented. The intentions of President Vladimir Putin and his government are strategic in this regard: lies and obfuscation are used to shift blame away from Russia or to encourage a narrative suiting Putin’s interests. A blatant example of Russian strategic disinformation occurred in the wake of the Salisbury poisonings. On 4 March 2018, Sergei Skripal, a former spy who had worked for Russian and British intelligence, and his daughter Yulia were poisoned with a Novichok-family nerve agent in Salisbury, southwest England. The British government conclusively ascertained Russia’s responsibility for the incident, which had come close to killing the Skripals.

Alexander Mishkin and Anatoly Chepiga (a.k.a. Alexander Petrov and Ruslan Boshirov), the would-be assassins of the Skripals in Salisbury on 4 March 2018. The pair gave a deeply unconvincing explanation to Russian state television of what they were doing in the city on the day of the attack, claiming their interest in Salisbury did not extend beyond its ecclesiastical architecture.
Credit: RT

Russia’s reaction was to deflect blame. The would-be assassins were publicly named as Anatoly Chepiga, Denis Sergeev and Alexander Mishkin, all of whom were members of Russia’s GRU military intelligence service. Mishkin and Chepiga later made a surreal video under their aliases Alexander Petrov and Ruslan Boshirov. The interview, filmed by the state broadcaster Russia Today, saw the two men deny any involvement while admitting they were in Salisbury on the day of the attack; purportedly, their motivation for visiting the English city was to admire its 13th-century cathedral. Scarcely believable though the interview was, Russia’s intention was to introduce doubt into the British government’s narrative.

Fake news on the battlefield

Tensions between NATO and Russia are clearly at their worst since the end of the Cold War. Should any military confrontation erupt between them, significant Russian disinformation and fake news would circulate in the public sphere before the conflict, with Moscow doing its best to weaken any public support within NATO for hostilities. Once war started, it seems unlikely that the diffusion of fake news and disinformation would be restricted to the strategic level.

Why would Russian information warriors not also resort to fake news to influence warfighting at the operational and tactical levels? Russia might not be the only potential NATO adversary to take this course of action. Looking back over 30 years of NATO operations since the Cold War’s conclusion, would the Republika Srpska entity of Bosnia and Herzegovina, the Taliban, or the Serbian and Libyan governments not have done the same? What stopped them was that the social media technology we have today was simply not available then.

Battle rhythm

The faux Newsthrob story at the start of this article has the potential to provoke a raft of consequences at the operational and tactical levels. Operational-level joint force headquarters often have a silent feed from media networks running alongside the large electronic screens portraying other information pertinent to the war. The author was told on a visit to a NATO headquarters that the media feed was necessary: sometimes open sources can be the first indication of an event that might need a military response. The media feed used during this visit was from the British Broadcasting Corporation (BBC).

Military command centres sometimes have a video feed from a rolling news source to keep tabs on what is happening within and outside the operational theatre. Several news feeds can be seen here on the main screen in this US Combined Air Operations Centre in Qatar.
Credit: US DoD

To be fair, such media organisations perform extensive checks on the veracity of the information they receive before it is treated as news. Typically, claims such as those made by the fictitious Newsthrob website would need to be confirmed by at least one or two other independent sources before being treated as fact. Without such confirmation, the claims made by the website would rightfully be ignored. Nonetheless, what if such checks and balances break down for any reason? Perhaps the news feed in the command centre is now showing a censored version of the harrowing footage of the ‘NATO attack’ and its aftermath. At the very least, the commander and staff may pause to digest the news, wondering if, and how, they should react. The worst case is that a commander chooses to react in a certain way based on the news they have just seen; rash or dangerous decisions could have a profound effect on the wider operation. Once again, the speed of battle risks encouraging corner-cutting, a temptation which must be resisted at all costs.

As our faked attack ‘involved’ NATO aircraft and troops, the command centre’s staff may seek to confirm the situation with their counterparts in echelons above or below. This may create a pause in decision-making while confirmation is sought. However, given the speed of contemporary and future military operations, any pause could have detrimental consequences for overall battle rhythm. Once the information is confirmed as false, officials will then have to communicate this fact to the media. The Alliance would now be reacting to an event that never even occurred, forced onto the back foot instead of setting the news agenda through operational pace.

An additional problem is that the Newsthrob video is now being shared across social media. Can NATO’s denials keep pace with the speed at which the fake news is spreading? The extent of the dissemination problem is sobering. A December 2023 study on the proliferation of fake news by the Redline public relations company found that one third of Americans admitted to unintentionally sharing false social media content, while around 47% of Americans had encountered fake news in the print media. The latter statistic may suggest that some conventional news outlets are less discriminating about the information they share.

Lots of bots

Individuals are not the only problem; so-called ‘bots’ are a major concern. Bots are software applications that mimic human behaviour online, but do so on a massive scale. They can pick up and magnify social media posts and disseminate them via a myriad of accounts, all of which appear genuine. The impact of bots was clear during the COVID-19 pandemic. In July 2020, research by Indiana University’s Observatory on Social Media revealed the extent of the problem. Researchers at the observatory created a programme called ‘BotometerLite’, designed to detect bots on Twitter (which almost no one calls X). The study determined that bots were overwhelmingly sharing COVID-19 disinformation. The proliferation of fake news by bots is made more concerning as the falsehoods they share are picked up and shared by other bots; it is somewhat ironic that disinformation about a virus spread in much the same way as the illness itself. Returning to our fake attack example above, a clear concern is that bots would share this false information. The fake report may have been shared thousands, if not millions, of times across social media before NATO gave its first rebuttal.
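To illustrate the kind of behavioural signals bot-detection research typically relies on, the sketch below scores an account’s ‘bot likelihood’ from a handful of crude heuristics. It is a deliberately simplified Python illustration: the features, thresholds and names are invented for this example and bear no relation to BotometerLite’s actual model, which draws on far richer account data.

```python
from dataclasses import dataclass

@dataclass
class Account:
    """Minimal, hypothetical account profile; real detectors use far richer features."""
    age_days: int          # how long the account has existed
    posts_per_day: float   # average posting rate
    repost_ratio: float    # share of activity that is reposts of others' content (0..1)
    followers: int
    following: int

def bot_likelihood(acct: Account) -> float:
    """Return a crude 0..1 score from hand-picked heuristics (illustrative only)."""
    score = 0.0
    if acct.age_days < 30:
        score += 0.25                      # very new account
    if acct.posts_per_day > 100:
        score += 0.30                      # superhuman posting tempo
    if acct.repost_ratio > 0.9:
        score += 0.25                      # pure amplification, little original content
    if acct.following > 0 and acct.followers / acct.following < 0.1:
        score += 0.20                      # follows many, followed by few
    return min(score, 1.0)

# Example: an account created last week that reposts hundreds of items a day
suspect = Account(age_days=7, posts_per_day=500, repost_ratio=0.98,
                  followers=12, following=4000)
print(f"bot likelihood: {bot_likelihood(suspect):.2f}")
```

Even heuristics this crude hint at why amplification networks are detectable: accounts that never sleep, never write original content and follow thousands of strangers behave in ways human users rarely do.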

Tools to counter fake news and disinformation, such as DataRobot software, were put through their paces during the US Army’s Cyber Quest exercise at Fort Eisenhower, Georgia, in 2023.
Credit: US DoD

This raises the question of what militaries can do to prevent fake news and disinformation contaminating their decision-making. In August 2023, the US Army was reported to have tested an AI-based tool known as DataRobot, which trawls social media content to detect bots and so-called ‘deep fake’ content. The latter term refers to material which initially appears genuine but which in reality is manipulated or completely fake; in March 2022, for example, a video appeared online which seemed to show Ukraine’s President Volodymyr Zelensky surrendering to Russia but which was completely fake. DataRobot focuses on a commander’s area of responsibility, determining whether such materials are being generated and disseminated from and within that area. Trials of DataRobot took place during the Army’s Cyber Quest exercise held in 2023 at Fort Eisenhower, Georgia. DataRobot’s information can be overlaid onto existing command and control data such as battle management system cartography. Not only would such information help commanders fight disinformation, it may also help to locate its source: knowing where the activity is originating, or taking place, could be an indicator of the presence of hostile forces or sympathisers. The disadvantage of DataRobot was that the software needed around three months’ worth of data from the operational area to train on before it could do its work. Such a requirement could be problematic in a sudden ‘come as you are’ war.
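The sketch below illustrates, in very simplified form, the kind of analysis described above: filtering geotagged posts to a commander’s area of responsibility and flagging bursts of near-identical messages that suggest coordinated amplification. The data structures, function names and thresholds are assumptions invented for this example; they do not represent DataRobot’s actual interfaces or methods.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post record: (timestamp, latitude, longitude, normalised text)
Post = tuple[datetime, float, float, str]

def in_aor(lat: float, lon: float, bbox: tuple[float, float, float, float]) -> bool:
    """Crude bounding-box test for a commander's area of responsibility (AOR)."""
    min_lat, min_lon, max_lat, max_lon = bbox
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

def amplification_bursts(posts: list[Post],
                         bbox: tuple[float, float, float, float],
                         window: timedelta = timedelta(minutes=10),
                         threshold: int = 50):
    """Flag messages repeated many times inside the AOR within a short time window."""
    buckets: dict[str, list[datetime]] = defaultdict(list)
    for ts, lat, lon, text in posts:
        if in_aor(lat, lon, bbox):
            buckets[text].append(ts)

    flagged = []
    for text, stamps in buckets.items():
        stamps.sort()
        for i, start in enumerate(stamps):
            # count copies of the same message posted within the window of this one
            count = sum(1 for t in stamps[i:] if t - start <= window)
            if count >= threshold:
                flagged.append((text, start, count))
                break
    return flagged  # each entry could then be plotted on battle-management cartography
```

In practice the text matching would be far fuzzier and the output would feed a geospatial layer rather than a simple list, but the principle of geofencing plus burst detection is the same.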

During the Association of the United States Army (AUSA) annual convention held in October 2023 in Washington DC, the US company Primer showcased its Command software. As Primer’s own website notes, Command provides “an operating picture of news and social media.” The idea is for the user to be able to see not only news events but also online reactions to these events unfolding in real time. Rather than the user having to continually monitor this data, the software’s algorithms collate and summarise it. Crucially, Command can be used to “(m)onitor adversarial narratives and the key influencers behind them,” says the company. Users can “uncover their motivations, affiliations, and influence networks. Evaluate the authenticity, trajectory, and impact of these narratives to make informed decisions on next steps.”

Usefully, the information collated by Command can be interrogated and the source of that information geolocated. The user may want to know social media reactions to a specific aspect of a particular event, for instance. Taking our example above, it might be useful to know how people in a particular area close to the fake attack are reacting. Are those who had been supportive of NATO’s actions now sharing social media content critical of the Alliance? If so, what effect might this have on the wider tactical or operational battle? Command can also be customised and tailored to the user’s requirements. Primer was contacted on numerous occasions during the preparation of this article, but the author received no response to these enquiries.
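As a rough illustration of the kind of question posed above, the following sketch groups geolocated posts by district and compares average sentiment before and after the fake story breaks. The districts, sentiment scores and record format are entirely invented for this example and are not drawn from Primer’s Command product.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: (district, hours_since_event, sentiment), where sentiment
# runs from -1.0 (hostile to NATO) to +1.0 (supportive). All values are invented
# purely to illustrate the kind of query discussed above.
posts = [
    ("Riverside", -12, 0.4), ("Riverside", 6, -0.6), ("Riverside", 8, -0.7),
    ("Old Town",  -10, 0.1), ("Old Town",  5,  0.0), ("Old Town",  9, -0.2),
]

before, after = defaultdict(list), defaultdict(list)
for district, hours, sentiment in posts:
    (before if hours < 0 else after)[district].append(sentiment)

# Compare average sentiment in each district before and after the story broke
for district in sorted(before):
    shift = mean(after[district]) - mean(before[district])
    print(f"{district}: sentiment shift {shift:+.2f} since the story broke")
```

A genuine tool would derive sentiment from language models rather than hand-labelled scores, but the before-and-after comparison is the essence of the query a commander’s staff might run.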

Primer’s Command software uses AI techniques to collate and present information on news feeds and social media activities across specific areas. Such tools are invaluable in helping commanders comprehend the extent of any fake news and disinformation threats they face in their area of responsibility.
Credit: Primer

The human factor

Command and DataRobot are both steps in the right direction. The reality is that militaries will have to face the dissemination of fake news and disinformation in future conflicts, potentially affecting decision-making. Much as technology is the vector for the threat, so it offers potential solutions. Nonetheless, good old common sense is probably the most important first line of defence. What is the source of the information? Has it been corroborated? What does your intuition tell you? Anyone with a vague awareness of Zelensky’s feelings towards Russia would have found the surrender video dramatically out of character; this incredulity should serve as the initial alarm bell. Combining human judgement with trusted technological solutions could prove a potent weapon in the battle against fake news.

Thomas Withington