Imagine wars fought by swarms of unmanned, autonomous weapons across land, air, sea, space and cyber. These weapons use artificial intelligence-based algorithms to make decisions, advanced sensors to maneuver and pinpoint vulnerabilities in targets, and offensive and defensive cyber capabilities – all in real time and independent of human decision making.

This is the emerging nature of warfare, and it’s portrayed via a series of vivid vignettes in a recent article in the U.S. Naval Institute’s Proceedings magazine, “On Hyperwar,” authored by retired U.S. Marine Corps Gen. John R. Allen and Amir Husain, CEO of AI company SparkCognition and author of the forthcoming book, The Sentient Machine. Fifth Domain recently caught up with Husain, who gave an in-depth interview on this concept and its implications for near-future warfare.

Generals and military theorists have sought to characterize the nature of war for millennia, and for long stretches warfare does not change dramatically. But occasionally, new methods of conducting war force a fundamental reconsideration of its very nature and implications. Allen and Husain, in “On Hyperwar,” identify cavalry, the rifled musket and Blitzkrieg as three historical examples.

For Allen and Husain, the advent of hyperwar signals the next “fundamentally transformative change” in warfare. The term hyperwar has been used historically with different denotations, which Allen and Husain trace, but the authors have redefined and adopted the term to describe this “AI-fueled, machine-waged conflict.” Allen and Husain wrote:

What makes this new form of warfare unique is the unparalleled speed enabled by automating decision making and the concurrency of action that become possible by leveraging artificial intelligence and machine cognition. … In military terms, hyperwar may be redefined as a type of conflict where human decision making is almost entirely absent from the observe-orient-decide-act (OODA) loop. As a consequence, the time associated with an OODA cycle will be reduced to near-instantaneous responses. The implications of these developments are many and game changing.

“On Hyperwar” details some of these changes, which include the concepts of infinite, distributed command & control capacity, concurrency of action/perfect coordination, logistical simplification and instant mission adaptations.

In his interview with Fifth Domain, Husain said he has been interested in AI since a young age. More recently, he has been intrigued by applications of autonomy in weapons systems. Running an AI company, Husain is at the forefront of applying new AI-driven technologies: he compared the company’s SparkPredict machine-failure prediction system to building R2-D2 from Star Wars, and its DeepNLP (natural language processing) system to building C-3PO.

“What was science fiction a few years ago is becoming fact now,” Husain told Fifth Domain.

Husain and Allen started discussing the concept of hyperwar a few years ago. Both men had been thinking independently about the increasing degrees of autonomy in war – Allen from a strategic perspective and Husain from a scientific perspective. Husain had been working for years on defensive and offensive autonomy in cybersecurity when he met Allen. These mutual interests led to one of their earliest discussions on the concept of hyperwar, and the two have continued collaborating.

Below is a full transcript of the Fifth Domain interview with Husain, which he gave earlier this month.

Will hyperwar force us to rethink the concept and strategy of deterrence? If so, how?

In a nutshell, yes, I believe so. I say this because the basic calculations that go into determining a minimal, credible deterrent might well shift. For example, how many enemy ballistic missiles can our BMD [ballistic missile defense] systems be expected to intercept? These hard calculations will change in the era of hyperwar.

Once opponents incorporate AI into guidance systems and deploy counter-BMD AI, the numbers that constitute a deterrent will change. And that’s at the high end. The types of protection we have to afford to strategic systems, such as a Patriot battery, will also change. So far, we’ve only heard of militias flying remotely piloted commercial drones into radars, a clever strategy designed to temporarily disable a SAM [surface-to-air missile] system. Now, imagine what happens when these drones are autonomously controlled.
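The kind of “hard calculation” Husain refers to can be made concrete with a toy model. The sketch below computes the expected number of missiles that survive a layered intercept attempt, assuming independent shots; every figure in it is an illustrative assumption, not a real BMD performance number:

```python
# Toy expected-leakers model for a missile salvo against layered defenses.
# All probabilities and counts are illustrative assumptions, not real figures.

def expected_leakers(salvo_size: int, shots_per_missile: int, p_kill: float) -> float:
    """Expected missiles surviving defense, assuming each intercept attempt
    is independent with single-shot kill probability p_kill."""
    p_leak = (1 - p_kill) ** shots_per_missile  # chance one missile survives all shots
    return salvo_size * p_leak

# Hypothetical 10-missile salvo, 2 interceptors per missile, 0.8 kill probability:
print(expected_leakers(10, 2, 0.8))  # 10 * 0.2**2, i.e. roughly 0.4 expected leakers
```

Shifting any one parameter – say, AI-improved guidance driving down the effective p_kill – changes how many interceptors are needed to hold leakage constant, which is why the deterrence arithmetic moves.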

Finally, consider the aircraft carrier, the big stick, which has been an important instrument of foreign policy for the United States since World War II. It was impossible for most adversaries to seriously challenge an American Carrier Strike Group. But with AI-powered cruise missiles, autonomously controlled swarms of small, fast-moving hydrofoils, an array of UCAVs [unmanned combat aerial vehicles] deployable in swarms and sophisticated information processing algorithms that extend detection and forewarning capabilities, will this continue to be so?

Will hyperwar change the traditional notions of military superiority/dominance and military advantage? If so, how?

We’ve long expected that the next near-peer conflict will begin with a cyber salvo. But in the hyperwar context, these salvos can be machine-initiated and -controlled and thus occur at an expanded scale, increased speed and broader scope.

Hyperwar is essentially about the leverage of AI and machine autonomy to shrink the OODA cycle to nothingness... so tight that it becomes almost impossible to keep humans in the loop in most places. Commanders can continue to supply intent, but the prosecution of much of the war can conceivably shift to machines.

As I mentioned earlier in context of the deterrence question, traditional military superiority and dominance will have to be re-evaluated. A fourth-generation fighter is expected to dominate against a third-generation threat when both are piloted by humans. But when you strip the third-generation fighter of its cockpit, implement sophisticated autonomy and no longer have to pay heed to the physical constraints dictated by the man in the cockpit, outcomes may change.

Consider also that a big source of military advantage is not equipment, but training and the quality of your human resources. High levels of training and readiness require immense amounts of capital in the traditional context. The U.S., being the richest country in the world for so long, had a natural edge in this department. But autonomous algorithms don’t need that type of time or expense. It will not be long before these algorithms are evolved in synthetic, simulated environments and then deployed in the real world. When an inferior foe can field highly skilled “pilots” that never tire, don’t need training and exhibit none of the biological constraints of a human pilot, what becomes of training-enabled advantage? It shrinks, in my view.

How will predictive analytics influence the start and conclusion – or, as Clausewitz would say, the suspension – of wars, and what role will humans play in those decisions?

Statistical simulation methodologies have long been employed in military planning. With Artificial Intelligence systems, the granularity at which outcomes can be modeled will become finer. In general, predictive accuracy will increase. AI can be applied to every stage of the war and almost every activity.
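The statistical simulation methodologies mentioned above can be illustrated with a minimal Monte Carlo sketch. Every probability below is an invented placeholder, not a modeled military figure:

```python
# Minimal Monte Carlo sketch: estimate an engagement's success probability
# by repeated random trials. All parameters are illustrative assumptions.
import random

def simulate_engagement(p_detect=0.9, p_engage=0.85, p_kill=0.7) -> bool:
    """One trial: the target must be detected, engaged, and killed in sequence."""
    return (random.random() < p_detect
            and random.random() < p_engage
            and random.random() < p_kill)

def estimate_success_rate(trials: int = 100_000) -> float:
    wins = sum(simulate_engagement() for _ in range(trials))
    return wins / trials

random.seed(0)
print(round(estimate_success_rate(), 3))  # near 0.9 * 0.85 * 0.7 = 0.5355
```

Finer granularity, in this framing, means modeling more stages and dependencies per trial; better predictive accuracy comes from better-calibrated input probabilities and more trials to shrink sampling error.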

Gen. Allen and I spent quite a bit of time coming up with areas where today’s AI might drive significant planning and execution advantages. There are dozens upon dozens of opportunities we’ve identified. For example, prior to the initiation of hostilities, AI can play a major role in how intelligence is processed, and how much raw data – imagery, text, advanced sensor inputs – can be analyzed. We’ve got the capability to deploy a huge number of sensors, but the way we process this data is bandwidth constrained. We still need humans to do a large amount of grunt work. By moving to more and more sophisticated autonomous information processing systems, we’ll increase our capacity to analyze information streams. More accurate and holistic intelligence appraisals are at least one way in which AI systems will influence the war before it’s even begun. As for humans, choosing to initiate hostilities is a decision that rests with them.

How will differing “norms” for warfare, perhaps encoded in algorithms, affect thinking on and strategy for military defense?

This is a massively complex topic, but I’ll share a bit of my perspective. One of the advantages of employing autonomous systems in war is that the SOPs [standard operating procedures] that optimize warfighter safety can be tweaked to err in favor of reducing accidental casualties.

To clarify your use of the term “norms” of warfare, let’s first establish that manned and unmanned systems must comply with the laws of armed conflict. Any entity that fields systems that violate these laws must be prosecuted and punished per international law. That said, in order to minimize collateral damage, autonomous systems can take even greater risks to their own safety than these laws call for.

One application that can potentially make warfare more effective and yet, less deadly, is the use of swarming autonomous systems to seek out a specific target, validate presence to a high degree of confidence and attack with incredible accuracy, but limited firepower. Taking out a specific terrorist commander, for example, may only involve eliminating that one individual, not risking innocents in whose vicinity he hides. That level of risk, validation and precision is hard to pull off with manned systems or even with remotely piloted, non-swarm systems. In the future, this may change.

It is common now to see technology advancing faster than human-derived systems – legal, regulatory, ethical and so on – can keep up. Which human-derived military systems must be rethought and revamped, or discarded altogether, given the imminent age of hyperwar?

I believe we’ll see increasing levels of autonomy in surveillance and reconnaissance systems as remotely piloted systems begin to morph into autonomous entities. Pulling the trigger may still require a human in the loop for the foreseeable future, particularly in low-intensity conflict or during times of overt peace. However, keeping these systems aloft and prosecuting non-kinetic missions will become an increasingly automated process. The information they transmit back will also be interpreted by AI-powered common intelligence picture systems.

While all wars are unique, and each uniquely complex, what will be the single or few critical factor(s) that will likely determine victory in hyperwar?

1. Assimilating and fully understanding the asymmetric impact of autonomy technologies and reflecting this in planning.

2. Investing in training, in terms of enabling warfighters to fully leverage the technology of hyperwar and to develop counters for enemy employment of these systems.

3. Finding and optimizing the precise balance of man/machine integration. Too little human control, too soon, and we risk compromising transparency and safety. Too much human control, and we’ll suffer at the hands of tight enemy OODA loops.

In which technological areas should the U.S. government prioritize funding to retain current military superiority?

Broadly, artificial intelligence. Specifically, machine vision, natural language understanding, explainability, autonomy algorithms, AI-assisted logistics and planning, automated knowledge management and retention systems and AI-enabled prescriptive/predictive maintenance, to name a few.

Aside from AI: continued sensor development – a key input to smart algorithms; next-generation propulsion technologies, both for efficiency in conventional systems and to enable practically deployable hypersonic vehicles; space-based systems; autonomous platforms – aerial, surface and sub-surface; robotics; and investments to enable higher degrees of mobility.

Which traditional military capabilities will hyperwar make obsolete?

I’d rather not reflect on this in too much depth, but I will highlight a couple of areas. The U.S., thankfully, has the world’s most powerful navy, but there are many countries that employ non-networked surface ships with minimal air defense capability. Pre-hyperwar, these assets wouldn’t necessarily be highly survivable, but in the age of swarming, low-cost autonomous drones and unmanned sub-surface vehicles, you might as well never have these ships leave port. They could be disabled with minimal expense and risk.

The employment of offensive cyber systems will rapidly render useless sensors and air defenses fielded by less sophisticated foes. The traditional SEAD [Suppression of Enemy Air Defenses] mission and use of stealth jets may in some cases be obviated by a cyber payload putting a SAM site out of commission without a shot being fired or a single life being risked.

What are the key takeaways on hyperwar you would convey directly to policymakers and military leaders?

First, hyperwar is a consequence of the militarization of artificial intelligence. It is here and will only become more significant with time. We must understand it and factor the implications of broad, widely deployed autonomous systems into our planning and our thinking.

Second, technology has always played a critical role in war, but as we ride the exponential – nearly asymptotic – technological curve, the rate and extent to which it impacts outcomes in conflict will continue to increase. We must be the best in AI research. We must be the best in the employment of AI systems. The Chinese government recently published its AI plan, with a stated goal of becoming the dominant AI power by 2030. The dominant AI power won’t just dominate the field of AI. Software is eating the world, and AI is eating software... dominance in AI will translate into economic advantage and dominance on the battlefield.

Third, the threats we face in the cyber dimension are about to get more complicated. They will expand beyond information theft, doxxing and cyber physical attacks to mass psyops implemented using autonomous systems. We are about to see an acceleration in cyber capability development, which is being enabled with the application of offensive and defensive AI. This means that we are vulnerable even in times of peace to new and unique threats, which we have not confronted at scale. How many would have guessed that analytics systems and Twitter bots would be used to influence an election and attack the foundation of our democracy? We need to develop a strategy and response framework to deal with the broad spectrum of threats we are likely to see in this new reality.

Clausewitz famously wrote in On War: “The objective nature of war makes it a calculation of probabilities. Now there is only one element still wanting to make it a game. That element is chance. …But together with chance, the accidental, and along with it, good luck, occupy a great place in war. …The absolute, the mathematical, nowhere finds any sure basis in the calculations and the art of war. From the outset, there is a play of possibilities, probabilities, good and bad luck… which makes war, of all branches of human activity, the most like a gambling game.” Will hyperwar make Clausewitz’s characterization of war obsolete?

No. War – like any large-scale, complex human affair – will certainly continue to be about probabilities. But the purpose of the planner and the commander is to prosecute the war in a manner which minimizes the likelihood of bad outcomes. The technologies of hyperwar – automated information analysis, broader and deeper intelligence capabilities, low-risk/low-cost autonomous reconnaissance, autonomous cyber systems, higher degrees of precision, to name but a few – will equip the commander with a powerful set of tools to bend and shape the course of the war so as to make positive outcomes more likely.