Is AI thickening the fog of the Israel-Hamas war?

 Scenes of Israel-Hamas conflict with a mask.

Perhaps no one is more closely associated with the phrase "the fog of war" than Robert McNamara, who would, in the Academy Award-winning documentary of that name, expound on the lessons he claimed to have learned as Secretary of Defense during the Vietnam War. "The fog of war," McNamara claimed, means that "war is so complex it's beyond the ability of the human mind to comprehend all the variables": an inadequacy that, McNamara explained, is fundamental to human nature and ultimately causes militaries to "kill people unnecessarily."

While McNamara's "fog of war" refers specifically to military decisions made in the field, the sense that war inherently obscures events and defies easy classification has expanded the popular understanding of the term to describe the much broader state of confusion and uncertainty experienced in, and often exploited during, moments of conflict, whether the person experiencing it is in the field or simply observing from afar.

It is through this expansive fog of war that much of the world has watched the ongoing war between Israel and Hamas in Gaza, where all sides, as well as external actors, have deployed dueling narratives designed to harden sentiments and rally supporters to their cause. Nowhere is this more apparent than on social media, where shaky video clips and grainy photographs make indelible impressions long before accurate vetting and crucial context can catch up. Advocates have long argued that artificial intelligence services can be a boon for digital efficiency and clarity in an increasingly complex world, while the proliferation of deepfakes, chatbots, and other AI projects has already shifted how people consume news online.

So can AI help, or hinder, efforts to peer through this current fog of war?

What the commentators said

This current war has "spawned so much false or misleading information online — much of it intentional, though not all — that it has obscured what is actually happening on the ground," The New York Times reported, noting that advances in AI tech "are already compounding that digital cacophony" in what's become an "authenticity crisis" across multiple social media platforms.

Not necessarily so, cautioned The Poynter Institute for Media Studies, which described this latest conflagration as the "first real test of experts' warnings about the threat of generative AI." That hypothesis "has so far remained unproven," with a recent study from the Harvard Kennedy School's Misinformation Review suggesting that "nowadays, journalists and fact checkers struggle not so much with deepfakes but with visuals taken out of context or with crude manipulations, such as cropping of images or so-called 'cheapfakes.'" To the extent that AI has been used to thicken this current fog of war, "it would either be fake audio (which has got significantly easier to make) or claims of AI used to dismiss real content (either image or audio), which we're seeing frequently," Sam Gregory, executive director of the human rights nonprofit WITNESS, told Poynter.

To Gregory's second point, uncertainty about AI detection tools' ability to assess real images has led to a "second level of disinformation," University of California, Berkeley professor and digital image expert Hany Farid told 404 Media. In particular, a widely circulated image of an Israeli child's burnt corpse was flagged as computer-generated by the free AI or Not detection service, and held up as evidence that Israel is faking war crimes allegations, despite lacking the telltale signs of digital manipulation, according to Farid.

With "dozens of these tools out there," and  "half of them say real, half say fake, there’s not a lot of signal there," Farid explained.

What next?

Newsrooms should be "putting [...] protections in place" to accurately assess and identify digitally manipulated or created footage, CBS News CEO Wendy McMahon told Axios, claiming just 10% of the 1,000 videos of the Israel-Hamas war her network had received were usable for air.

The challenge doesn't seem to be dissipating anytime soon. AI has been used to "essentially amplify the distribution or dissemination of terrorist propaganda," FBI Director Chris Wray said during an intelligence agency summit, citing translation tools that make messages "more coherent and more credible to potential supporters."

Ultimately, "the cost of sharing a deepfake is part of the war," Israeli deepfake researcher Michael Matias told The Times of Israel. "We are at the start of a deepfake revolution."