The Military’s Recruitment of AI Has Already Begun

Photo Illustration by Erin O’Flynn/The Daily Beast/Getty Images and Public Domain

On Aug. 10, the Department of Defense announced it was launching a task force to look into generative AI—programs like ChatGPT, DALL-E, and others that produce finished work such as code, answers to questions, or specific images on request. The announcement is part of the U.S. military’s ongoing effort to keep pace with modern technologies, studying and incorporating them as they prove useful, while taking at least some time to determine what risks the use of AI for military purposes poses.

AI is an ungainly catch-all term for a family of distantly related technologies, but it’s nevertheless being heavily pushed onto consumers by Silicon Valley techlords who are convinced they’ve found the next big thing. As governments and especially militaries follow suit, it’s important to ask the question: What, if anything, can AI offer for understanding and planning war?

Algorithmic analysis, especially analysis based on large language models (like the one underpinning ChatGPT), has been heralded as a way for computer processes to learn from training data and respond to new circumstances. When people express fears of “Killer Robots,” that fear is focused on the tangible: What if AI lets a robot with a gun select who to kill in battle, and gives the robot the speed and authority to pull the trigger? Algorithms can fail in ways that are opaque and unpredictable, leading not just to error on the battlefield, but to novel error.

And military interest in AI won’t remain confined to the tactical or battlefield level. The Pentagon’s expressed interest in generative AI is expansive.

“With AI at the forefront of tech advancements and public discourse, the DoD will enhance its operations in areas such as warfighting, business affairs, health, readiness, and policy with the implementation of generative AI,” the Chief Digital and Artificial Intelligence Office said in a statement about the announced generative AI task force.

So how should we expect to see the military pursue AI as a new tool? Two recently published academic papers offer perspective to help understand the shape and limits of AI, especially when it comes to policy.

Predicting the Next Battle

One of the most vexing challenges facing a state and its security forces is predicting when and where battles will occur. This is especially true when it comes to fighting non-state actors—armed insurgencies may operate from within a geographic expanse, but strike at targets of opportunity throughout the area they can reach.

In a paper entitled “Discovering the mesoscale for chains of conflict,” published Aug. 1 by PNAS Nexus, authors Niraj Kushwaha and Edward D. Lee, both of the Complexity Science Hub in Vienna, Austria, created a model that takes existing conflict data, maps it across time and space, and can then be used to predict how previous incidents will cascade into larger waves of clashes and fighting.

Kushwaha and Lee started with public data on political violence incidents recorded by the Armed Conflict Location & Event Data Project, constrained to just events in Africa and from 1997 through 2019. The authors then matched that data across grids of space and slices of time. A battle in one place in the past was a good sign there would be new battles in adjacent or nearby locations in the future, depending on the time scales chosen for a given query.
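The bookkeeping behind that kind of space-and-time matching is simple enough to sketch in code. What follows is a minimal, hypothetical illustration only—the column names, cell size, time window, and linking rule are assumptions made for this example, not the authors’ published method: events are binned into grid cells and time slices, and each event is linked forward to any event in the same or a neighboring cell in the next slice.

```python
# Hypothetical sketch of space-time binning for conflict events.
# Column names, cell size, window length, and the linking rule are
# illustrative assumptions, not the method published by Kushwaha and Lee.
import pandas as pd

CELL_DEG = 0.5      # spatial grid cell size, in degrees
WINDOW_DAYS = 14    # length of each time slice

def bin_events(events: pd.DataFrame) -> pd.DataFrame:
    """Assign each event (with latitude, longitude, datetime 'date')
    to a (grid-x, grid-y, time-slice) cell."""
    out = events.copy()
    out["gx"] = (out["longitude"] // CELL_DEG).astype(int)
    out["gy"] = (out["latitude"] // CELL_DEG).astype(int)
    t0 = out["date"].min()
    out["slice"] = ((out["date"] - t0).dt.days // WINDOW_DAYS).astype(int)
    return out

def link_cascades(binned: pd.DataFrame) -> list[tuple[int, int]]:
    """Link event i to event j when j falls in the next time slice
    and in the same or an adjacent grid cell (a crude cascade rule)."""
    cells: dict[tuple[int, int, int], list[int]] = {}
    for idx, r in binned.iterrows():
        cells.setdefault((r["gx"], r["gy"], r["slice"]), []).append(idx)
    links = []
    for idx, r in binned.iterrows():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                key = (r["gx"] + dx, r["gy"] + dy, r["slice"] + 1)
                links.extend((idx, j) for j in cells.get(key, []))
    return links
```

Chained together, links like these trace how an incident in one cell “seeds” incidents nearby in the following weeks—the avalanche structure the authors describe below.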

“In a way evocative of snow or sandpile avalanches, a conflict originates in one place and cascades from there. There is a similar cascading effect in armed conflicts,” Kushwaha said in a news release. One example of the model’s insight is how it identified violence from Boko Haram in Nigeria displacing herders, leading to further conflict on the periphery of where Boko Haram operates. The model can also identify events linked to a different group, the Fulani militia. These forces, though distinct groups, can both take advantage of a strained government response to any of a number of insurgencies in the country, and both can drive cascading violence in the future.

By repeating the process across other conflicts, the authors found that different events in the same place can be traced to different conflicts. Using just the model at hand, they were able to find and connect later violent incidents to earlier ones—inferences that are present in the data, but hard to parse without a model of conflict cascades teasing them out.

The promise of bringing big data and algorithmic analysis to data sets like this is that the resulting models can spot connections otherwise invisible to human perception. While much of Kushwaha and Lee’s work is built on more reproducible algorithmic tools, the authors are keenly aware that AI offers further depth for such research.

The conflicts unfold in this model as avalanches, from an origin to cascading violence that spills forward in time and across geography. Kushwaha and Lee suggest their model may also be useful for understanding how other snowballing social factors, like unrest, migration, and epidemics, feed into conflict and its spread. In addition, these avalanches can be used to train machine learning algorithms, creating a tool that can look for similar connections across a range of scales, and with more variables added in. Because the process works on scales from small protests to large battles, it has broad application as a tool for studying the spread of conflict.

One of the most immediate possibilities for such work is creating, with training data and algorithmic insight, a way for countries to adapt to and predict the violence of insurgencies shortly after a conflict breaks out. Here, the tool still requires some data from the conflict before it can start modeling, but any speed in assessment could help a country better deploy its military or other assets with an eye towards winning the fight early.

Command and Control

If algorithmic tools can be used to predict conflicts cascading from one battle to another, it is reasonable to wonder if an AI tool could be built not just to track conflict, but to allocate tools for winning it. That, argue Cameron Hunter and Bleddyn E. Bowen of the University of Leicester, would be a mistake.

“We’ll never have a model of an AI major-general: Artificial Intelligence, command decisions, and kitsch visions of war,” reads the title of their impressively named paper, published Aug. 7 in the Journal of Strategic Studies. It is about the real limits of AI decision-making for the kinds of choices commanders are called upon to make in war.

At the heart of the matter are the kind of reasoning employed by narrow AI, the kinds of decisions required of commanders, and the divergent relationships those kinds of reasoning have to information.

“Command decisions in tactics and strategy—and rounded advice in making those decisions—require multiple kinds of logical inference, and the good judgment to know when to use each one,” Hunter and Bowen wrote. “Command decisions at their heart require judgment, which is something AI technologies cannot do—machine learning, ‘narrow’ AIs can only calculate.”

At its core, the paper explores the chasm between inductive and abductive reasoning, and how that gap will shape the role of AI in modern warfare. Inductive logic makes decisions based on probabilistic inferences, like assessing the sum total of all chess moves and picking the next move that leads to the most winning board states. Abductive reasoning is based on observation, and is made in the absence of perfect information.
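To make the inductive side concrete, it can be caricatured in a few lines of code: given a tally of how past games turned out after each candidate move, pick the move with the best empirical win rate. The tally structure below is invented purely for illustration; the point is that the choice is only ever as good as the record of prior outcomes behind it.

```python
# Purely illustrative caricature of inductive move selection:
# choose the move with the highest empirical win rate from past games.
# The tally structure is invented for this example.
def pick_move(tally: dict[str, tuple[int, int]]) -> str:
    """tally maps a candidate move to (wins, games_played)."""
    return max(tally, key=lambda m: tally[m][0] / tally[m][1])

# The recommendation reflects nothing but the recorded history of outcomes.
print(pick_move({"e4": (48, 100), "d4": (52, 100), "c4": (39, 100)}))  # -> "d4"
```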

Narrow AIs, from chess competitor Deep Blue to the DARPA-designed AI pilot Alpha Dogfight, are capable of winning games against human opponents. But their achievements take place in structured, rule-bound environments. “Fog of war,” or the inherent unknowability of all enemy positions and actions, is completely absent in board games like chess and Go. Instead, players have perfect information about where all pieces are at all times, and know the rules governing where pieces can end up in the future. Even the aerial duels of Alpha Dogfight are bound by knowables: the planes involved, and the conditions under which an AI-piloted plane can “win” the battle.

“The key selling point of AI commanders or advisors—rapid computation—is therefore moot because war is logically ‘undecidable’ and cannot be resolved by computing power and datasets alone,” write Hunter and Bowen. “The notion that there are objectively ‘correct’ choices in strategy that are enacted on a battlefield looks particularly kitsch in light of a historical record filled with examples of unexpected or non-battle centric routes to defeat and victory.”

In their paper, Hunter and Bowen focus on command decisions—those made about how to achieve victory in the absence of information. Throughout history, commanders have won by inferring what could be happening, and acting on those assumptions. Removed from the finite rules and conditions of a game, inductive AI can only pursue actions already coded as leading to victory, and will be unable to adapt to surprise.

“We believe that narrow AI will in fact remain a halfwit tactician as well as a ‘moron strategist’, because tactics is also the thinking part of warfare and requires the same kind of logic as strategy and politics. Being good at Chess does not make an AI good at devising a plan to storm a redoubt,” Hunter and Bowen concluded.

This is not a problem that can be solved with better algorithms or greater computing power, as it hinges on traits entirely outside the ability of AI. War is a fundamentally human endeavor, and while there are processes within it that can benefit from automation, the specifically political nature of ending a conflict is likely to elude any machine, especially ones trained to secure victory through quantifiable means.

Known Unknowables

When nations go to war, they do so with human institutions, built up of interlinked thinking parts, all coordinated through chains of command and deference. Generals observe war at a different scale than squad leaders and presidents, though all are assumed to be working towards the same end: the precise application of violence needed to resolve a conflict in their favor. This is a process that generates a tremendous amount of data, from the automatically generated geo-coordinated flight logs of modern battlefields to the industrial-scale assessments of anti-aircraft fire on returning bombers.

AI tools offer a means to understand this data in a useful way, from assessing maintenance needs based on undiscovered correlations to predicting where an insurgent army may strike next. AI tools will likely be developed and assigned where speed, especially, is crucial to operations. Bowen and Hunter point to the Aegis Combat System, an automated defensive weapon that coordinates sensor data and interceptors to protect ships from incoming rocket and missile fire, as one example of rules-bound AI already deployed and serving a useful purpose.

Outside of situations where the rules are clear, like protecting a destroyer from incoming hostile fire within set parameters, AI tools will struggle to offer what commanders and soldiers need. While it is common to quip that generals are stuck trying to fight the last war, AI may be incapable of anything but. The same process that works on a board game with perfect information is bound to struggle the moment it falls under attack from a dimension it had not considered.

As the Pentagon prepares to adopt AI to ease operations, it has never been more important to understand what AI cannot do. Otherwise, human commanders trusting in AI tools are in for a surprise they should have seen coming.
