Ex-Gen. Stanley McChrystal: AI weapons ‘frightening,’ ‘will’ make lethal decisions

The chief software officer at the Pentagon, Nicolas Chaillan, suddenly quit last month over concerns that the U.S. military had fallen "15 to 20 years" behind China in cyber warfare and artificial intelligence, he told the Financial Times.

The warning marks the latest sign of discord within the U.S. military over how to prepare for what former Google executive Kai-Fu Lee calls the "third revolution" in warfare, after gunpowder and nuclear arms.

In a new interview, ex-General Stanley McChrystal — who led coalition forces in Afghanistan for two years and now heads a consulting firm called the McChrystal Group — said artificial intelligence will inevitably come to make lethal decisions on the battlefield. However, he acknowledged the "frightening" risks of potential malfunction or mistake.

"People say, 'We'll never give control over lethal strike to artificial intelligence,'" says McChrystal, who recently co-authored a book entitled, "Risk: A User's Guide." "That's wrong. We absolutely will."

"Because at a certain point, you can't respond fast enough, unless you do that," he adds. "A hypervelocity missile, hypersonic missile coming at the United States aircraft carrier, you don't have time for individuals to do the tracking, you don't have time for senior leaders to be in the decision loop, or you won't be able to engage the missile."

A ban on autonomous weapons has drawn support from 30 countries, though an in-depth report commissioned by Congress advised the U.S. to oppose a ban, since it could prevent the country from using weapons already in its possession.

In 2015, prominent figures in tech like Tesla (TSLA) CEO Elon Musk and Apple (AAPL) co-founder Steve Wozniak, as well as thousands of AI researchers, signed an open letter calling for a ban on such weapons.

President Joe Biden, speaking at a summit of U.S. and European leaders in February, called for international collaboration to "shape the rules that will govern the advance of technology and the norms of behavior in cyberspace, artificial intelligence, biotechnology so that they are used to lift people up, not used to pin them down.”

The accelerating pace of warfare will require U.S. military officers to cede decision-making power to artificial intelligence, McChrystal said. But that brings risks, he noted.

"You've created technology, you put in processes for it to operate, but then to operate at the speed of war you're essentially turning it on and trusting it," he says.

"That can be pretty frightening, particularly if the potential of malfunction or spoofing or any of those other things are in," he adds.

Soldiers surround a Titan Strike unmanned ground vehicle. (Photo by Ben Birchall/PA Images via Getty Images)

McChrystal, who graduated from the U.S. Military Academy at West Point in 1976, served a 34-year military career that included a stint as the commander of U.S. special forces and ultimately, a two-year tenure as the head of coalition forces in Afghanistan that ended in 2010.

Then-president Barack Obama accepted McChrystal's resignation days after a Rolling Stone article in which McChrystal and aides criticized senior administration officials.

Speaking to Yahoo Finance, McChrystal warned in general of the power taken up by AI systems when organizations do not fully understand their capabilities.

"It's hard to have a complete understanding of, in a modern organization now, what decisions are actually being made algorithmically and what are being made by people," he says.

"When you don't have that, I would argue you have the risk of no longer having real understanding of control of your organizations," he adds.
