Drone attacks: When war machines work, and when they don’t

Down the hallway from where I work in the history department at West Point, cadets get to test out all kinds of new robotic machines. Occasionally, one of these contraptions comes crawling by my door, reverses itself and then goes the other way. You have seen them: they are the same kind of robots featured in the movie The Hurt Locker, crawly Erector Set creations that soldiers send out to defuse IEDs. At West Point, their hallway visits are amusing, as you might guess, but the subject itself is not. Warfare is rapidly turning to robotics, raising all kinds of ethical, moral, and constitutional questions.

The most visible robotic project right now is the use of drone aircraft. Unmanned vehicles are the wave of the future; indeed, the aerospace industry has all but stopped research and development on new manned combat aircraft. Going forward, Air Force "pilots" will be people sitting at computers, directing drones onto targets thousands of miles away. And this is not just the future; it has already been happening for several years now.

In a small book published last fall (Predator: The Remote-Control Air War over Iraq and Afghanistan: A Pilot's Story), Air Force Lt. Col. Matt Martin described fighting the war in Iraq from his desk at Nellis Air Force Base near Las Vegas, a healthy 7,500 miles from Baghdad. Martin would drive to work like any commuter, check the morning email and then direct a drone onto a target on a Baghdad street before picking up some milk and heading home to the family. He is a modern warrior of the kind we have long worried would emerge in the age of video games.

The benefit of drone warfare is obvious: for the side using the drone, there is little risk to life and limb. After all, you are sending a machine to do your bidding. But this benefit is also one of its potential drawbacks. The less risk war entails, the more likely we may be to engage in it. Then, too, robotic warfare is inherently more secret than overt forms of conflict, raising issues of accountability. And, as anyone with an imagination bent toward science fiction could tell you, there is the nagging feeling that this could all get out of hand. As robots become more and more effective at striking targets, as these machines develop the ability to think on their own, as we begin to calculate that our soldiers' lives are safer when a robot is in charge than when a fallible human is, well, you know, it all feels a bit like a Ray Bradbury novel come to life.

Drones are just a small part of the Army's emerging robotic capability. You may be familiar with the BigDog, a quadruped robot that looks like a gigantic scorpion and can cross rough terrain at four miles an hour while carrying 340 pounds. New adaptations of this technology will soon allow the BigDog to ford streams and scale inclines. The zoomorphic approach to military technology is about to go much further: new forms of drones are being tested to operate in coordinated units, acting, sounding and even looking like a swarm of hummingbirds or a flock of seagulls as they descend on their unsuspecting targets.

As with many modern medical technologies, these new warrior tools are arriving faster than we can adapt our ethical and moral consciences to them, and long, long before we can agree on civilized rules for their use. In another notable recent book (Wired for War: The Robotics Revolution and Conflict in the 21st Century), the Brookings Institution's P. W. Singer points out that law enforcement agencies will soon have these same capabilities, issuing speeding tickets from drones hovering over the interstate and raising constitutional issues of privacy.

In war, where proportionality has always been a guideline of "civilized" conflict (the rules of war hold, for instance, that a strike must be scaled to the size and nature of its target), drones and other robotic systems raise new issues of fairness. In Afghanistan, where the Taliban has no air force, drone strikes and other forms of robotic warfare are, as Dennis Blair, the director of national intelligence, recently told the New York Times, "bitterly resented," since the enemy there "cannot duplicate such feats of warfare without costs to its own troops."

Too bad, you may say sarcastically. And, indeed, many observers regard robotic warfare as nothing more than a military advantage we have fairly developed and ought to use like any other weapon. But proportionality is not only a moral issue; it is also a tactical one. When an enemy is "defeated" not by our skill so much as by our inherent technical superiority, it leaves the vanquished feeling that no clear victory was won. As at least one recent study speculates, the broader political effects of the drone campaign in Afghanistan may be fueling insurgent sentiment among the local population that equals or outweighs the tactical advantage the drones offer. If so, it will not be the first time America has discovered that a comparatively primitive opponent, overmatched by our sheer firepower, can nonetheless meet or even defeat us through the strength of its will.

The simple lesson may be that for all our science, we still need to remind ourselves that war is a human activity aimed at achieving a political mission among humans. New forms of weaponry give us a technical advantage that may be unbeatable on the battlefield, but even with such superiority the mission – achieving a durable peace and a political result – may remain elusive.

Todd Brewster is the Director of the National Constitution Center’s Peter Jennings Project and the Center for Oral History at West Point.