PART 3: AI Defence, Persistent Conflict, and Complex Systems Warfare
Integrated Campaigning and Cross-Domain Synergy
This series explores how frontier AI and sub-threshold statecraft are dissolving the old peace–war divide, and sets out key concepts for defending open societies amid complex-systems warfare.
Part 1: War is Not How Wars are Waged
Part 2: Persistent Competition and the End of the Peace–War Dichotomy
Part 3: Integrated Campaigning and Cross-Domain Synergy
Part 4: Frontier AI, Control Dilemmas, and the Race for Supremacy
Part 5: Offensive AI, New Weapons, and New Risks in Escalation
Part 6: Defensive AI, Resilient Infrastructure, and Safeguarding Society
Part 7: Conclusion (Navigating an Unseen Battlefield)
Part 8: Appendix — The Human-AI Relationship
To prevail in a protracted grey-zone competition, nations must employ integrated campaigning: the orchestrated use of all instruments of national power across every domain of operation. Rather than treating military, diplomatic, economic, cyber, and informational efforts as separate silos that only coordinate in wartime, integrated campaigning treats them as parts of a single, continuous campaign plan. The goal is cross-domain synergy: actions in one sphere create advantages in others, adding up to an outsized strategic effect.
What does this look like in practice? Consider an illustrative package of moves by a state actor against a rival:
Informational domain: The campaign might begin by seeding rumours and conspiracy theories on social media to inflame public discontent in the target country. AI-powered propaganda networks push tailored messages to certain demographics, aiming to “drive grassroots protests” and social unrest. For example, fake news about government corruption or culturally divisive issues is amplified by swarms of bots (some human-like chatbots) to erode trust in authorities.
Cyber domain: At the same time, cyber operators work to disrupt critical services. They might launch an attack on the nation’s banking system or commercial base – perhaps not a blatant outage, but something subtler: disrupting payment processing, stealing customer data, or holding access to ransom. This financial chaos can act as a compounding perturbation, frustrating citizens and companies at a time when tensions are already elevated.
Economic/Logistics domain: Meanwhile, behind the scenes, the adversary uses economic statecraft to tighten the screws. It applies pressure through trade policy or leverages control of a supply chain chokepoint to reduce the target nation’s access to some vital good. Over months, for instance, it might manipulate shipping and contracts to cause shortages of fuel or other critical commodities, or to distort prices.
Military/Paramilitary domain: All the while, conventional military force stays in the background as an implied threat – perhaps through an “exercise” near the rival’s border, perhaps with Claim-Enforcement Patrols,1 or by covertly deploying special forces or mercenary units (the now-infamous “little green men” strategy).2 These moves deter the target from cracking down too hard (for fear of escalation), may assist proxy forces or separatists on the ground, may probe capabilities, or may seek to bait a response that justifies escalation.
Individually, each of these steps might irritate or harm the target. But together, they can destabilise a nation.
Real-world precedents abound. In 2014, unmarked Russian special forces (those aforementioned “little green men”) clandestinely seized key installations in Crimea while Moscow simultaneously flooded the information space with propaganda to confuse and paralyse any response. This integrated approach allowed Russia to annex Crimea without open battle – military deception, local proxies, and info-war combined for a swift fait accompli. Similarly, Iran has blended cyber attacks with kinetic proxy strikes to pressure its adversaries: for example, Iranian hackers deployed the Shamoon wiper malware to destroy data on the servers of Saudi Aramco, Saudi Arabia’s national oil company (a cyber sabotage), while Iranian-aligned Houthi militants launched drones and missiles at Saudi oil facilities (a physical attack).3 Each tactic amplified the other’s impact on Saudi critical infrastructure and sense of security. China’s expansive use of integrated techniques is also noteworthy – from industrial espionage and influence operations to the kind of pre-placed network infiltrations discussed earlier. The 2023 Volt Typhoon operation, where Chinese operators penetrated Western power grids and communications, can be seen as one piece of a larger puzzle: those cyber accesses could be leveraged alongside psychological operations (e.g. spreading disinformation to sow panic during a crisis) and naval manoeuvres in a coordinated campaign.
From the defender’s perspective, integrated campaigning means breaking down silos between agencies and domains. Intelligence must flow seamlessly and be integrated appropriately so that diplomats, cyber defenders, economic policymakers, and military planners share a common picture of threats and opportunities. If an adversary is conducting a whole-of-society assault – say, hacking media outlets, bribing officials, and massing troops on the border as intimidation – you need a whole-of-government response.4 This might involve, for example: law enforcement and tech companies shutting down bots and misinformation networks; pre-bunking mal-information or laying the groundwork to respond to leaks; hardening financial systems; counter-hacking and counter-espionage; diplomats rallying international support and attribution to impose costs on the aggressor; economic agencies using sanctions or trade controls to retaliate; and the military posturing to deter escalations.
A key concept for national AI strategy within integrated campaigning is the “AI Triad”5 — the data, algorithms, and computing power that comprise AI. These three components underpin a nation’s AI capabilities in every domain. Integrated campaigns increasingly rely on AI tools – whether for automating elements of cyber defence, gathering and processing intelligence, or guiding autonomous weapons – so ensuring strength in the AI Triad is already a security priority. For instance, superior data (including intelligence data or social media data) can make propaganda or surveillance more effective; better algorithms can outwit an opponent’s cybersecurity or optimise logistic flows in a conflict; more compute allows faster training of advanced models, which could confer an edge in everything from deciphering intercepted communications to simulating military operations.
However, the need for ‘Sovereign AI’ has grown as AI capabilities and entrenchment grow. As AI is embedded more deeply into an economy and critical infrastructure, its chains of dependency become points of leverage – or chokepoints – for an adversary (or even an ally). Nations are investing heavily across this triad (e.g. securing semiconductor supply chains for computing power, controlling data, funding AI research) because dominance in AI enables more coherent and potent action across every domain of competition. In practical terms, if your AI systems can collect and synthesise more multi-source intelligence faster, your whole campaign (diplomatic, military, etc.) becomes more agile. If your AI can generate highly persuasive deepfake content, your information operations gain traction against an adversary’s population. Integration and AI advantage thus go hand in hand.
To illustrate integrated campaigning with a concrete scenario, consider a fifth-generation warfare6 case study in electoral interference. (5GW is a term for conflict defined by distributed, information-centric operations targeting perceptions and societies, rather than solely military forces.) Imagine a hostile foreign power attempting to sway a democratic election in Country X: Months before the vote, AI-driven influence agents begin micro-targeting swing voters in Country X with tailored disinformation on social media – each voter might see AI-generated “news” articles or even deepfake videos aligned to their biases. As the election nears, the adversary escalates: its cyber units attempt to hack the voting infrastructure, maybe planting malware in voter registration systems to cause chaos on polling day. Concurrently, the adversary’s state media and diplomats push a narrative that the election in Country X is rigged (to undermine its legitimacy), and they quietly funnel funds to certain extremist parties or protest groups. This multifaceted approach – cyber, informational, economic subversion (funding), diplomatic narrative – is classic integrated 5GW. From Country X’s perspective, defending against this requires matching integration: deploying AI tools to detect deepfakes and bot networks, hardening election IT systems and having contingency plans (e.g. backup paper ballots), running strategic communications to inform the public about false narratives, possibly sanctioning the adversary or exposing their activities on the international stage, and coordinating with allies for support (e.g. intelligence sharing about the threats). Only a holistic response can counter a holistic attack.
In sum, Integrated Campaigning is about coherence and synchronisation in an era where conflict is multi-layered. It’s fighting a chess game across multiple boards – and AI is quickly becoming a force multiplier on all of those boards. As we integrate AI into every facet of statecraft, we must also be ready to counter our adversaries’ AI-empowered campaigns. The next section delves deeper into AI itself, examining the high end of AI capabilities (like potential superintelligence) and why the pursuit of ever more powerful AI systems raises a classic “race versus control” dilemma for global security.
Claim-Enforcement Patrols are approximately divisible into Rights-Protection Patrols and Freedom of Navigation Operations.
Freedom of Navigation Operations (FONOPs) are carefully scripted sorties by US and allied warships or aircraft that challenge “excessive” maritime claims; the Pentagon’s FY 2023 report records 29 such assertions against 17 states.
Beijing denounces these moves, with its Foreign Ministry claiming “there is never any issue with freedom of navigation in the South China Sea”, and instead mounts “rights-protection patrols” or “combat-readiness patrols” to reinforce its own claims, such as the recurring coast-guard blockade of Second Thomas Shoal (2023–24) and the PLA’s “Joint Sword” encirclement drills around Taiwan. When Chinese task groups transit international straits (Miyako, Feb 2025, or the Aleutians with Russian escorts, Aug 2023), they label the passage routine innocent passage or transit passage, not a FONOP.
“Little green men”: A colloquial term for the masked, unmarked soldiers (without national insignia) who appeared during Russia’s 2014 annexation of Crimea. These were in fact Russian special forces, but the Kremlin initially denied it. By operating without official flags, these units sowed confusion and exploited legal gray areas. The tactic allowed Russia to seize territory while delaying and complicating any international response, exemplifying hybrid warfare. See: How, Why, and When Russia Will Deploy Little Green Men (2016) or From Little Green Men to Little Blue Helmets (2021) for more.
Shamoon malware: An aggressive computer virus (wiper) first deployed by Iranian-linked hackers in 2012 against Saudi Aramco, the world’s largest oil company. Shamoon erased data on around 30,000 Saudi Aramco computers, replacing it with an image of a burning American flag. It resurfaced in later attacks as well. The malware’s destructive payload and likely state sponsorship made it a hallmark of cyber warfare’s potential to cripple critical infrastructure without a single shot fired.
See: Simmons, A. (2019). “Iran’s Threat to Saudi Critical Infrastructure.” Center for Strategic & International Studies (CSIS). This details Iran’s use of cyberattacks like Shamoon against Saudi Arabia’s oil infrastructure as part of its asymmetric toolkit.
Ideally, you also need a sophisticated and educated populace; media control and population influence is dangerous territory for a government to wade into, and is typically an arena in which the government wields only unpopular, blunt tools. An intelligent, literate, and highly educated population is (one hopes) likely to be somewhat more resilient to rudimentary outside influences.
AI Triad – data, algorithms, compute: A framework for national AI power highlighting that success in AI comes from a combination of abundant quality data, advanced algorithms (and research talent), and powerful computing infrastructure. According to AI policy analysts, these are the key levers a nation can strengthen to gain an edge. For example, the United States’ advantages include world-leading semiconductor companies (compute) and Big Tech datasets, whereas China leverages a huge population’s data and heavy government investments in supercomputing. Gaining superiority in one pillar (say, cutting-edge chips) can amplify capabilities if the other two are also robust. Integrated strategy means shoring up all three.
For more, consider Buchanan, B. (2020). “The AI Triad and What It Means for National Security Strategy.” Center for Security and Emerging Technology (CSET). Readers should note this reference is representative of a much broader conversation; the AI Triad idea is widely cited in policy and governance discourse.
Fifth-generation warfare (5GW): A proposed term for the latest evolution of warfare characterised by blurring of combatants and pervasive information manipulation. In 5GW, the target is not just enemy military forces, but the minds of the civilian population and the cohesion of society. Methods include psychological operations, cyber sabotage, and proxy insurgencies – often with no overt attribution. The “weaponization of narrative” is central. Whereas 4GW (fourth-gen warfare) involved insurgency and terrorism (non-state actors fighting states), 5GW can see states or other actors directly attacking the social fabric of another state, often from within, by leveraging local factions and information dominance. AI tools supercharge 5GW by enabling more convincing fake content, micro-targeted influence, and automated disruption at scale.
